00:00:00.001 Started by upstream project "autotest-per-patch" build number 132691 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.145 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.146 The recommended git tool is: git 00:00:00.146 using credential 00000000-0000-0000-0000-000000000002 00:00:00.149 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.187 Fetching changes from the remote Git repository 00:00:00.194 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.223 Using shallow fetch with depth 1 00:00:00.223 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.223 > git --version # timeout=10 00:00:00.249 > git --version # 'git version 2.39.2' 00:00:00.249 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.272 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.272 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.166 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.176 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.187 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.187 > git config core.sparsecheckout # timeout=10 00:00:06.197 > git read-tree -mu HEAD # timeout=10 00:00:06.212 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.234 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.234 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.327 [Pipeline] Start of Pipeline 00:00:06.342 [Pipeline] library 00:00:06.343 Loading library shm_lib@master 00:00:06.343 Library shm_lib@master is cached. Copying from home. 00:00:06.358 [Pipeline] node 00:00:06.369 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.371 [Pipeline] { 00:00:06.377 [Pipeline] catchError 00:00:06.378 [Pipeline] { 00:00:06.386 [Pipeline] wrap 00:00:06.391 [Pipeline] { 00:00:06.396 [Pipeline] stage 00:00:06.397 [Pipeline] { (Prologue) 00:00:06.407 [Pipeline] echo 00:00:06.408 Node: VM-host-SM0 00:00:06.412 [Pipeline] cleanWs 00:00:06.418 [WS-CLEANUP] Deleting project workspace... 00:00:06.418 [WS-CLEANUP] Deferred wipeout is used... 
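The sequence above pins the job-config repo (jbp) to one revision via a shallow fetch. A minimal sketch of the same checkout outside Jenkins, reusing the URL, proxy, and revision values from the log (the proxy is site-specific; adjust or drop it):

# Sketch only: URL, proxy, and revision copied from the log above.
export https_proxy=http://proxy-dmz.intel.com:911
git init jbp && cd jbp
git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# --depth=1 matches the "Using shallow fetch with depth 1" step in the log
git fetch --tags --force --progress --depth=1 origin refs/heads/master
git checkout -f db4637e8b949f278f369ec13f70585206ccd9507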
00:00:06.424 [WS-CLEANUP] done 00:00:06.613 [Pipeline] setCustomBuildProperty 00:00:06.672 [Pipeline] httpRequest 00:00:07.048 [Pipeline] echo 00:00:07.050 Sorcerer 10.211.164.20 is alive 00:00:07.059 [Pipeline] retry 00:00:07.061 [Pipeline] { 00:00:07.073 [Pipeline] httpRequest 00:00:07.076 HttpMethod: GET 00:00:07.077 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.078 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.080 Response Code: HTTP/1.1 200 OK 00:00:07.080 Success: Status code 200 is in the accepted range: 200,404 00:00:07.081 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.875 [Pipeline] } 00:00:07.891 [Pipeline] // retry 00:00:07.897 [Pipeline] sh 00:00:08.177 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.190 [Pipeline] httpRequest 00:00:08.620 [Pipeline] echo 00:00:08.622 Sorcerer 10.211.164.20 is alive 00:00:08.630 [Pipeline] retry 00:00:08.633 [Pipeline] { 00:00:08.647 [Pipeline] httpRequest 00:00:08.652 HttpMethod: GET 00:00:08.652 URL: http://10.211.164.20/packages/spdk_85bc1e85ab0983b7f3814bad50dcfad2df551836.tar.gz 00:00:08.653 Sending request to url: http://10.211.164.20/packages/spdk_85bc1e85ab0983b7f3814bad50dcfad2df551836.tar.gz 00:00:08.663 Response Code: HTTP/1.1 200 OK 00:00:08.664 Success: Status code 200 is in the accepted range: 200,404 00:00:08.664 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_85bc1e85ab0983b7f3814bad50dcfad2df551836.tar.gz 00:00:27.360 [Pipeline] } 00:00:27.378 [Pipeline] // retry 00:00:27.385 [Pipeline] sh 00:00:27.668 + tar --no-same-owner -xf spdk_85bc1e85ab0983b7f3814bad50dcfad2df551836.tar.gz 00:00:30.978 [Pipeline] sh 00:00:31.255 + git -C spdk log --oneline -n5 00:00:31.255 85bc1e85a lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata) 00:00:31.255 bb633fc85 lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process) 00:00:31.255 4985835f7 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata) 00:00:31.255 b4d3c8f7d lib/reduce: Support storing metadata on backing dev. 
(1 of 5, struct define and init process) 00:00:31.255 3031b0f5f lib/reduce: Delete logic of persisting old chunk map 00:00:31.271 [Pipeline] writeFile 00:00:31.283 [Pipeline] sh 00:00:31.563 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:31.573 [Pipeline] sh 00:00:31.853 + cat autorun-spdk.conf 00:00:31.853 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.853 SPDK_TEST_NVMF=1 00:00:31.853 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.853 SPDK_TEST_USDT=1 00:00:31.853 SPDK_TEST_NVMF_MDNS=1 00:00:31.853 SPDK_RUN_UBSAN=1 00:00:31.853 NET_TYPE=virt 00:00:31.853 SPDK_JSONRPC_GO_CLIENT=1 00:00:31.853 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.860 RUN_NIGHTLY=0 00:00:31.862 [Pipeline] } 00:00:31.875 [Pipeline] // stage 00:00:31.889 [Pipeline] stage 00:00:31.892 [Pipeline] { (Run VM) 00:00:31.904 [Pipeline] sh 00:00:32.181 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.181 + echo 'Start stage prepare_nvme.sh' 00:00:32.181 Start stage prepare_nvme.sh 00:00:32.181 + [[ -n 6 ]] 00:00:32.181 + disk_prefix=ex6 00:00:32.181 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:32.181 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:32.181 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:32.181 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.181 ++ SPDK_TEST_NVMF=1 00:00:32.181 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:32.181 ++ SPDK_TEST_USDT=1 00:00:32.181 ++ SPDK_TEST_NVMF_MDNS=1 00:00:32.181 ++ SPDK_RUN_UBSAN=1 00:00:32.181 ++ NET_TYPE=virt 00:00:32.181 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:32.181 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.181 ++ RUN_NIGHTLY=0 00:00:32.181 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:32.181 + nvme_files=() 00:00:32.181 + declare -A nvme_files 00:00:32.181 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.181 + nvme_files['nvme.img']=5G 00:00:32.181 + nvme_files['nvme-cmb.img']=5G 00:00:32.181 + nvme_files['nvme-multi0.img']=4G 00:00:32.181 + nvme_files['nvme-multi1.img']=4G 00:00:32.181 + nvme_files['nvme-multi2.img']=4G 00:00:32.181 + nvme_files['nvme-openstack.img']=8G 00:00:32.181 + nvme_files['nvme-zns.img']=5G 00:00:32.181 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.181 + (( SPDK_TEST_FTL == 1 )) 00:00:32.181 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.181 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:32.181 + for nvme in "${!nvme_files[@]}" 00:00:32.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:32.181 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.181 + for nvme in "${!nvme_files[@]}" 00:00:32.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:32.181 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.181 + for nvme in "${!nvme_files[@]}" 00:00:32.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:32.181 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:32.181 + for nvme in "${!nvme_files[@]}" 00:00:32.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:32.181 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.181 + for nvme in "${!nvme_files[@]}" 00:00:32.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:32.181 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.181 + for nvme in "${!nvme_files[@]}" 00:00:32.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:32.181 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.181 + for nvme in "${!nvme_files[@]}" 00:00:32.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:32.440 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.440 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:32.440 + echo 'End stage prepare_nvme.sh' 00:00:32.440 End stage prepare_nvme.sh 00:00:32.451 [Pipeline] sh 00:00:32.732 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:32.732 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:00:32.732 00:00:32.732 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:32.732 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:32.732 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:32.732 HELP=0 00:00:32.732 DRY_RUN=0 00:00:32.732 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:00:32.732 NVME_DISKS_TYPE=nvme,nvme, 00:00:32.732 NVME_AUTO_CREATE=0 00:00:32.732 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:00:32.732 NVME_CMB=,, 00:00:32.732 NVME_PMR=,, 00:00:32.732 NVME_ZNS=,, 00:00:32.732 NVME_MS=,, 00:00:32.732 NVME_FDP=,, 00:00:32.732 
SPDK_VAGRANT_DISTRO=fedora39 00:00:32.732 SPDK_VAGRANT_VMCPU=10 00:00:32.732 SPDK_VAGRANT_VMRAM=12288 00:00:32.732 SPDK_VAGRANT_PROVIDER=libvirt 00:00:32.732 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:32.732 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:32.732 SPDK_OPENSTACK_NETWORK=0 00:00:32.732 VAGRANT_PACKAGE_BOX=0 00:00:32.732 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:32.732 FORCE_DISTRO=true 00:00:32.732 VAGRANT_BOX_VERSION= 00:00:32.732 EXTRA_VAGRANTFILES= 00:00:32.732 NIC_MODEL=e1000 00:00:32.732 00:00:32.732 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:00:32.732 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:36.016 Bringing machine 'default' up with 'libvirt' provider... 00:00:36.273 ==> default: Creating image (snapshot of base box volume). 00:00:36.532 ==> default: Creating domain with the following settings... 00:00:36.532 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733400246_584f55ab3b71fcdf74c0 00:00:36.532 ==> default: -- Domain type: kvm 00:00:36.532 ==> default: -- Cpus: 10 00:00:36.532 ==> default: -- Feature: acpi 00:00:36.532 ==> default: -- Feature: apic 00:00:36.532 ==> default: -- Feature: pae 00:00:36.532 ==> default: -- Memory: 12288M 00:00:36.532 ==> default: -- Memory Backing: hugepages: 00:00:36.532 ==> default: -- Management MAC: 00:00:36.532 ==> default: -- Loader: 00:00:36.532 ==> default: -- Nvram: 00:00:36.532 ==> default: -- Base box: spdk/fedora39 00:00:36.532 ==> default: -- Storage pool: default 00:00:36.532 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733400246_584f55ab3b71fcdf74c0.img (20G) 00:00:36.532 ==> default: -- Volume Cache: default 00:00:36.532 ==> default: -- Kernel: 00:00:36.532 ==> default: -- Initrd: 00:00:36.532 ==> default: -- Graphics Type: vnc 00:00:36.532 ==> default: -- Graphics Port: -1 00:00:36.532 ==> default: -- Graphics IP: 127.0.0.1 00:00:36.532 ==> default: -- Graphics Password: Not defined 00:00:36.532 ==> default: -- Video Type: cirrus 00:00:36.532 ==> default: -- Video VRAM: 9216 00:00:36.532 ==> default: -- Sound Type: 00:00:36.532 ==> default: -- Keymap: en-us 00:00:36.532 ==> default: -- TPM Path: 00:00:36.532 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:36.532 ==> default: -- Command line args: 00:00:36.532 ==> default: -> value=-device, 00:00:36.532 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:36.532 ==> default: -> value=-drive, 00:00:36.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:00:36.532 ==> default: -> value=-device, 00:00:36.532 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.532 ==> default: -> value=-device, 00:00:36.532 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:36.532 ==> default: -> value=-drive, 00:00:36.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:36.532 ==> default: -> value=-device, 00:00:36.532 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.532 ==> default: -> value=-drive, 00:00:36.532 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:36.532 ==> default: -> value=-device, 00:00:36.532 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.532 ==> default: -> value=-drive, 00:00:36.532 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:36.532 ==> default: -> value=-device, 00:00:36.532 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.790 ==> default: Creating shared folders metadata... 00:00:36.790 ==> default: Starting domain. 00:00:38.695 ==> default: Waiting for domain to get an IP address... 00:00:56.786 ==> default: Waiting for SSH to become available... 00:00:56.786 ==> default: Configuring and enabling network interfaces... 00:01:00.067 default: SSH address: 192.168.121.116:22 00:01:00.067 default: SSH username: vagrant 00:01:00.067 default: SSH auth method: private key 00:01:01.969 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:10.095 ==> default: Mounting SSHFS shared folder... 00:01:11.473 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:11.473 ==> default: Checking Mount.. 00:01:12.849 ==> default: Folder Successfully Mounted! 00:01:12.849 ==> default: Running provisioner: file... 00:01:13.417 default: ~/.gitconfig => .gitconfig 00:01:13.984 00:01:13.984 SUCCESS! 00:01:13.984 00:01:13.984 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:13.984 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:13.984 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:13.984 00:01:13.993 [Pipeline] } 00:01:14.008 [Pipeline] // stage 00:01:14.017 [Pipeline] dir 00:01:14.018 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:01:14.020 [Pipeline] { 00:01:14.032 [Pipeline] catchError 00:01:14.034 [Pipeline] { 00:01:14.047 [Pipeline] sh 00:01:14.326 + vagrant ssh-config --host vagrant 00:01:14.326 + sed -ne /^Host/,$p 00:01:14.326 + tee ssh_conf 00:01:17.609 Host vagrant 00:01:17.609 HostName 192.168.121.116 00:01:17.609 User vagrant 00:01:17.609 Port 22 00:01:17.609 UserKnownHostsFile /dev/null 00:01:17.609 StrictHostKeyChecking no 00:01:17.609 PasswordAuthentication no 00:01:17.609 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:17.609 IdentitiesOnly yes 00:01:17.609 LogLevel FATAL 00:01:17.609 ForwardAgent yes 00:01:17.609 ForwardX11 yes 00:01:17.609 00:01:17.623 [Pipeline] withEnv 00:01:17.626 [Pipeline] { 00:01:17.639 [Pipeline] sh 00:01:17.918 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.918 source /etc/os-release 00:01:17.918 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.918 # Minimal, systemd-like check. 
00:01:17.918 if [[ -e /.dockerenv ]]; then 00:01:17.918 # Clear garbage from the node's name: 00:01:17.918 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.918 # $HOSTNAME is the actual container id 00:01:17.918 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.918 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.918 # We can assume this is a mount from a host where container is running, 00:01:17.918 # so fetch its hostname to easily identify the target swarm worker. 00:01:17.918 container="$(< /etc/hostname) ($agent)" 00:01:17.918 else 00:01:17.918 # Fallback 00:01:17.918 container=$agent 00:01:17.918 fi 00:01:17.918 fi 00:01:17.918 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.918 00:01:18.186 [Pipeline] } 00:01:18.203 [Pipeline] // withEnv 00:01:18.211 [Pipeline] setCustomBuildProperty 00:01:18.226 [Pipeline] stage 00:01:18.228 [Pipeline] { (Tests) 00:01:18.244 [Pipeline] sh 00:01:18.525 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:18.796 [Pipeline] sh 00:01:19.077 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:19.349 [Pipeline] timeout 00:01:19.350 Timeout set to expire in 1 hr 0 min 00:01:19.351 [Pipeline] { 00:01:19.366 [Pipeline] sh 00:01:19.645 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:20.212 HEAD is now at 85bc1e85a lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata) 00:01:20.222 [Pipeline] sh 00:01:20.519 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.805 [Pipeline] sh 00:01:21.085 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:21.405 [Pipeline] sh 00:01:21.684 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:21.943 ++ readlink -f spdk_repo 00:01:21.943 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.943 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.943 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.943 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.943 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.943 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:21.943 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.943 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:21.943 + cd /home/vagrant/spdk_repo 00:01:21.943 + source /etc/os-release 00:01:21.943 ++ NAME='Fedora Linux' 00:01:21.943 ++ VERSION='39 (Cloud Edition)' 00:01:21.943 ++ ID=fedora 00:01:21.943 ++ VERSION_ID=39 00:01:21.943 ++ VERSION_CODENAME= 00:01:21.943 ++ PLATFORM_ID=platform:f39 00:01:21.943 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.943 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.943 ++ LOGO=fedora-logo-icon 00:01:21.943 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.943 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.943 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.943 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.943 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.943 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.943 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.943 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.943 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.943 ++ SUPPORT_END=2024-11-12 00:01:21.943 ++ VARIANT='Cloud Edition' 00:01:21.943 ++ VARIANT_ID=cloud 00:01:21.943 + uname -a 00:01:21.943 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.943 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:22.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:22.511 Hugepages 00:01:22.511 node hugesize free / total 00:01:22.511 node0 1048576kB 0 / 0 00:01:22.511 node0 2048kB 0 / 0 00:01:22.511 00:01:22.511 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.511 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:22.511 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:22.511 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:22.511 + rm -f /tmp/spdk-ld-path 00:01:22.511 + source autorun-spdk.conf 00:01:22.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.511 ++ SPDK_TEST_NVMF=1 00:01:22.511 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.511 ++ SPDK_TEST_USDT=1 00:01:22.511 ++ SPDK_TEST_NVMF_MDNS=1 00:01:22.511 ++ SPDK_RUN_UBSAN=1 00:01:22.511 ++ NET_TYPE=virt 00:01:22.511 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:22.511 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.511 ++ RUN_NIGHTLY=0 00:01:22.511 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.511 + [[ -n '' ]] 00:01:22.511 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:22.511 + for M in /var/spdk/build-*-manifest.txt 00:01:22.511 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:22.511 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.511 + for M in /var/spdk/build-*-manifest.txt 00:01:22.511 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.511 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.511 + for M in /var/spdk/build-*-manifest.txt 00:01:22.511 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.511 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.511 ++ uname 00:01:22.511 + [[ Linux == \L\i\n\u\x ]] 00:01:22.511 + sudo dmesg -T 00:01:22.511 + sudo dmesg --clear 00:01:22.511 + dmesg_pid=5263 00:01:22.511 + [[ Fedora Linux == FreeBSD ]] 00:01:22.511 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.511 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.511 + sudo dmesg -Tw 00:01:22.511 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.511 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.511 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.511 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.511 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.511 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.511 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.511 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.511 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.511 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.511 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.511 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.511 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.771 12:04:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.771 12:04:53 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.771 12:04:53 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:01:22.771 12:04:53 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:22.771 12:04:53 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.771 12:04:53 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.771 12:04:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:22.771 12:04:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.771 12:04:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.771 12:04:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.771 12:04:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.771 12:04:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.771 12:04:53 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.771 12:04:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.771 12:04:53 -- paths/export.sh@5 -- $ export PATH 00:01:22.771 12:04:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.771 12:04:53 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:22.771 12:04:53 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:22.771 12:04:53 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733400293.XXXXXX 00:01:22.771 12:04:53 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733400293.nRxlLB 00:01:22.771 12:04:53 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:22.771 12:04:53 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:22.771 12:04:53 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:22.771 12:04:53 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:22.771 12:04:53 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.771 12:04:53 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:22.771 12:04:53 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:22.771 12:04:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.771 12:04:53 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:22.771 12:04:53 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:22.771 12:04:53 -- pm/common@17 -- $ local monitor 00:01:22.772 12:04:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.772 12:04:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.772 12:04:53 -- pm/common@21 -- $ date +%s 00:01:22.772 12:04:53 -- pm/common@25 -- $ sleep 1 00:01:22.772 12:04:53 -- pm/common@21 -- $ date +%s 00:01:22.772 12:04:53 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733400293 00:01:22.772 12:04:53 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733400293 00:01:22.772 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733400293_collect-vmstat.pm.log 00:01:22.772 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733400293_collect-cpu-load.pm.log 00:01:23.709 12:04:54 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:23.709 12:04:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.709 12:04:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.709 12:04:54 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:23.709 12:04:54 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.709 Thu Dec 5 12:04:54 PM UTC 2024 00:01:23.709 12:04:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.709 v25.01-pre-286-g85bc1e85a 00:01:23.709 12:04:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.709 12:04:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.709 12:04:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.709 12:04:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.709 12:04:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.709 12:04:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.709 ************************************ 00:01:23.709 START TEST ubsan 00:01:23.709 ************************************ 00:01:23.709 using ubsan 00:01:23.709 12:04:54 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:23.709 00:01:23.709 real 0m0.000s 00:01:23.709 user 0m0.000s 00:01:23.709 sys 0m0.000s 00:01:23.709 12:04:54 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:23.709 12:04:54 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.709 ************************************ 00:01:23.709 END TEST ubsan 00:01:23.709 ************************************ 00:01:23.709 12:04:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.709 12:04:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.709 12:04:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.709 12:04:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.709 12:04:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.709 12:04:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.709 12:04:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.709 12:04:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.709 12:04:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:23.967 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:23.967 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:24.534 Using 'verbs' RDMA provider 00:01:40.348 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:52.554 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:52.554 go version go1.21.1 linux/amd64 00:01:52.554 Creating mk/config.mk...done. 00:01:52.554 Creating mk/cc.flags.mk...done. 
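The configure call traced above can be rerun by hand with identical flags. A sketch, assuming a complete SPDK checkout at the path the log uses, with the pkgdep dependencies already installed:

# Same flags autobuild.sh passed to configure in the trace above.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared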
00:01:52.554 Type 'make' to build. 00:01:52.554 12:05:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:52.554 12:05:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:52.554 12:05:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:52.554 12:05:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.554 ************************************ 00:01:52.554 START TEST make 00:01:52.554 ************************************ 00:01:52.554 12:05:23 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:52.812 make[1]: Nothing to be done for 'all'. 00:02:07.684 The Meson build system 00:02:07.684 Version: 1.5.0 00:02:07.684 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:07.684 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:07.684 Build type: native build 00:02:07.684 Program cat found: YES (/usr/bin/cat) 00:02:07.684 Project name: DPDK 00:02:07.684 Project version: 24.03.0 00:02:07.684 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:07.684 C linker for the host machine: cc ld.bfd 2.40-14 00:02:07.684 Host machine cpu family: x86_64 00:02:07.684 Host machine cpu: x86_64 00:02:07.684 Message: ## Building in Developer Mode ## 00:02:07.684 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.684 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:07.684 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.684 Program python3 found: YES (/usr/bin/python3) 00:02:07.684 Program cat found: YES (/usr/bin/cat) 00:02:07.684 Compiler for C supports arguments -march=native: YES 00:02:07.684 Checking for size of "void *" : 8 00:02:07.684 Checking for size of "void *" : 8 (cached) 00:02:07.684 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:07.684 Library m found: YES 00:02:07.684 Library numa found: YES 00:02:07.684 Has header "numaif.h" : YES 00:02:07.684 Library fdt found: NO 00:02:07.684 Library execinfo found: NO 00:02:07.684 Has header "execinfo.h" : YES 00:02:07.684 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:07.684 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.684 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.684 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.684 Run-time dependency openssl found: YES 3.1.1 00:02:07.684 Run-time dependency libpcap found: YES 1.10.4 00:02:07.684 Has header "pcap.h" with dependency libpcap: YES 00:02:07.684 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.684 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.684 Compiler for C supports arguments -Wformat: YES 00:02:07.684 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.684 Compiler for C supports arguments -Wformat-security: NO 00:02:07.684 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.684 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.684 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.684 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.684 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.684 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.684 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.684 Compiler for C supports arguments -Wundef: YES 00:02:07.684 Compiler for C supports arguments -Wwrite-strings: YES 
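This Meson run is SPDK's make driving the bundled DPDK configure. A rough hand-run equivalent, using the source and build directories printed above; the option values are the ones dumped later in this log under "User defined options", abridged here:

# Rough standalone equivalent of the DPDK configure step (dirs from the log).
cd /home/vagrant/spdk_repo/spdk/dpdk
meson setup build-tmp --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build -Dtests=false
ninja -C build-tmp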
00:02:07.684 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.684 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.684 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.684 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.684 Program objdump found: YES (/usr/bin/objdump) 00:02:07.685 Compiler for C supports arguments -mavx512f: YES 00:02:07.685 Checking if "AVX512 checking" compiles: YES 00:02:07.685 Fetching value of define "__SSE4_2__" : 1 00:02:07.685 Fetching value of define "__AES__" : 1 00:02:07.685 Fetching value of define "__AVX__" : 1 00:02:07.685 Fetching value of define "__AVX2__" : 1 00:02:07.685 Fetching value of define "__AVX512BW__" : (undefined) 00:02:07.685 Fetching value of define "__AVX512CD__" : (undefined) 00:02:07.685 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:07.685 Fetching value of define "__AVX512F__" : (undefined) 00:02:07.685 Fetching value of define "__AVX512VL__" : (undefined) 00:02:07.685 Fetching value of define "__PCLMUL__" : 1 00:02:07.685 Fetching value of define "__RDRND__" : 1 00:02:07.685 Fetching value of define "__RDSEED__" : 1 00:02:07.685 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.685 Fetching value of define "__znver1__" : (undefined) 00:02:07.685 Fetching value of define "__znver2__" : (undefined) 00:02:07.685 Fetching value of define "__znver3__" : (undefined) 00:02:07.685 Fetching value of define "__znver4__" : (undefined) 00:02:07.685 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.685 Message: lib/log: Defining dependency "log" 00:02:07.685 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.685 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.685 Checking for function "getentropy" : NO 00:02:07.685 Message: lib/eal: Defining dependency "eal" 00:02:07.685 Message: lib/ring: Defining dependency "ring" 00:02:07.685 Message: lib/rcu: Defining dependency "rcu" 00:02:07.685 Message: lib/mempool: Defining dependency "mempool" 00:02:07.685 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.685 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.685 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:07.685 Compiler for C supports arguments -mpclmul: YES 00:02:07.685 Compiler for C supports arguments -maes: YES 00:02:07.685 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.685 Compiler for C supports arguments -mavx512bw: YES 00:02:07.685 Compiler for C supports arguments -mavx512dq: YES 00:02:07.685 Compiler for C supports arguments -mavx512vl: YES 00:02:07.685 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.685 Compiler for C supports arguments -mavx2: YES 00:02:07.685 Compiler for C supports arguments -mavx: YES 00:02:07.685 Message: lib/net: Defining dependency "net" 00:02:07.685 Message: lib/meter: Defining dependency "meter" 00:02:07.685 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.685 Message: lib/pci: Defining dependency "pci" 00:02:07.685 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.685 Message: lib/hash: Defining dependency "hash" 00:02:07.685 Message: lib/timer: Defining dependency "timer" 00:02:07.685 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.685 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.685 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.685 Compiler for C supports arguments -Wno-cast-qual: YES 
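The "Fetching value of define" lines show Meson reading the compiler's predefined macros. The same set can be dumped directly to see what -march=native enables on a given host (gcc and clang both accept -dM -E):

# Dump the predefined macros Meson probes above.
cc -march=native -dM -E - </dev/null | grep -E '__(AES|AVX|PCLMUL|RDRND|RDSEED|SSE4_2)' | sort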
00:02:07.685 Message: lib/power: Defining dependency "power" 00:02:07.685 Message: lib/reorder: Defining dependency "reorder" 00:02:07.685 Message: lib/security: Defining dependency "security" 00:02:07.685 Has header "linux/userfaultfd.h" : YES 00:02:07.685 Has header "linux/vduse.h" : YES 00:02:07.685 Message: lib/vhost: Defining dependency "vhost" 00:02:07.685 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.685 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.685 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.685 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.685 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:07.685 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:07.685 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:07.685 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:07.685 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:07.685 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:07.685 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:07.685 Configuring doxy-api-html.conf using configuration 00:02:07.685 Configuring doxy-api-man.conf using configuration 00:02:07.685 Program mandb found: YES (/usr/bin/mandb) 00:02:07.685 Program sphinx-build found: NO 00:02:07.685 Configuring rte_build_config.h using configuration 00:02:07.685 Message: 00:02:07.685 ================= 00:02:07.685 Applications Enabled 00:02:07.685 ================= 00:02:07.685 00:02:07.685 apps: 00:02:07.685 00:02:07.685 00:02:07.685 Message: 00:02:07.685 ================= 00:02:07.685 Libraries Enabled 00:02:07.685 ================= 00:02:07.685 00:02:07.685 libs: 00:02:07.685 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.685 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:07.685 cryptodev, dmadev, power, reorder, security, vhost, 00:02:07.685 00:02:07.685 Message: 00:02:07.685 =============== 00:02:07.685 Drivers Enabled 00:02:07.685 =============== 00:02:07.685 00:02:07.685 common: 00:02:07.685 00:02:07.685 bus: 00:02:07.685 pci, vdev, 00:02:07.685 mempool: 00:02:07.685 ring, 00:02:07.685 dma: 00:02:07.685 00:02:07.685 net: 00:02:07.685 00:02:07.685 crypto: 00:02:07.685 00:02:07.685 compress: 00:02:07.685 00:02:07.685 vdpa: 00:02:07.685 00:02:07.685 00:02:07.685 Message: 00:02:07.685 ================= 00:02:07.685 Content Skipped 00:02:07.685 ================= 00:02:07.685 00:02:07.685 apps: 00:02:07.685 dumpcap: explicitly disabled via build config 00:02:07.685 graph: explicitly disabled via build config 00:02:07.685 pdump: explicitly disabled via build config 00:02:07.685 proc-info: explicitly disabled via build config 00:02:07.685 test-acl: explicitly disabled via build config 00:02:07.685 test-bbdev: explicitly disabled via build config 00:02:07.685 test-cmdline: explicitly disabled via build config 00:02:07.685 test-compress-perf: explicitly disabled via build config 00:02:07.685 test-crypto-perf: explicitly disabled via build config 00:02:07.685 test-dma-perf: explicitly disabled via build config 00:02:07.685 test-eventdev: explicitly disabled via build config 00:02:07.685 test-fib: explicitly disabled via build config 00:02:07.685 test-flow-perf: explicitly disabled via build config 00:02:07.685 test-gpudev: explicitly disabled via build config 00:02:07.685 test-mldev: explicitly disabled via 
build config 00:02:07.685 test-pipeline: explicitly disabled via build config 00:02:07.685 test-pmd: explicitly disabled via build config 00:02:07.685 test-regex: explicitly disabled via build config 00:02:07.685 test-sad: explicitly disabled via build config 00:02:07.685 test-security-perf: explicitly disabled via build config 00:02:07.685 00:02:07.685 libs: 00:02:07.685 argparse: explicitly disabled via build config 00:02:07.685 metrics: explicitly disabled via build config 00:02:07.685 acl: explicitly disabled via build config 00:02:07.685 bbdev: explicitly disabled via build config 00:02:07.685 bitratestats: explicitly disabled via build config 00:02:07.685 bpf: explicitly disabled via build config 00:02:07.685 cfgfile: explicitly disabled via build config 00:02:07.685 distributor: explicitly disabled via build config 00:02:07.685 efd: explicitly disabled via build config 00:02:07.685 eventdev: explicitly disabled via build config 00:02:07.685 dispatcher: explicitly disabled via build config 00:02:07.685 gpudev: explicitly disabled via build config 00:02:07.685 gro: explicitly disabled via build config 00:02:07.685 gso: explicitly disabled via build config 00:02:07.685 ip_frag: explicitly disabled via build config 00:02:07.685 jobstats: explicitly disabled via build config 00:02:07.685 latencystats: explicitly disabled via build config 00:02:07.685 lpm: explicitly disabled via build config 00:02:07.685 member: explicitly disabled via build config 00:02:07.685 pcapng: explicitly disabled via build config 00:02:07.685 rawdev: explicitly disabled via build config 00:02:07.685 regexdev: explicitly disabled via build config 00:02:07.685 mldev: explicitly disabled via build config 00:02:07.685 rib: explicitly disabled via build config 00:02:07.685 sched: explicitly disabled via build config 00:02:07.685 stack: explicitly disabled via build config 00:02:07.685 ipsec: explicitly disabled via build config 00:02:07.685 pdcp: explicitly disabled via build config 00:02:07.685 fib: explicitly disabled via build config 00:02:07.685 port: explicitly disabled via build config 00:02:07.685 pdump: explicitly disabled via build config 00:02:07.685 table: explicitly disabled via build config 00:02:07.685 pipeline: explicitly disabled via build config 00:02:07.685 graph: explicitly disabled via build config 00:02:07.685 node: explicitly disabled via build config 00:02:07.685 00:02:07.685 drivers: 00:02:07.685 common/cpt: not in enabled drivers build config 00:02:07.685 common/dpaax: not in enabled drivers build config 00:02:07.685 common/iavf: not in enabled drivers build config 00:02:07.685 common/idpf: not in enabled drivers build config 00:02:07.685 common/ionic: not in enabled drivers build config 00:02:07.685 common/mvep: not in enabled drivers build config 00:02:07.685 common/octeontx: not in enabled drivers build config 00:02:07.685 bus/auxiliary: not in enabled drivers build config 00:02:07.685 bus/cdx: not in enabled drivers build config 00:02:07.685 bus/dpaa: not in enabled drivers build config 00:02:07.685 bus/fslmc: not in enabled drivers build config 00:02:07.685 bus/ifpga: not in enabled drivers build config 00:02:07.685 bus/platform: not in enabled drivers build config 00:02:07.685 bus/uacce: not in enabled drivers build config 00:02:07.685 bus/vmbus: not in enabled drivers build config 00:02:07.685 common/cnxk: not in enabled drivers build config 00:02:07.685 common/mlx5: not in enabled drivers build config 00:02:07.685 common/nfp: not in enabled drivers build config 00:02:07.685 
common/nitrox: not in enabled drivers build config 00:02:07.685 common/qat: not in enabled drivers build config 00:02:07.685 common/sfc_efx: not in enabled drivers build config 00:02:07.685 mempool/bucket: not in enabled drivers build config 00:02:07.685 mempool/cnxk: not in enabled drivers build config 00:02:07.685 mempool/dpaa: not in enabled drivers build config 00:02:07.685 mempool/dpaa2: not in enabled drivers build config 00:02:07.686 mempool/octeontx: not in enabled drivers build config 00:02:07.686 mempool/stack: not in enabled drivers build config 00:02:07.686 dma/cnxk: not in enabled drivers build config 00:02:07.686 dma/dpaa: not in enabled drivers build config 00:02:07.686 dma/dpaa2: not in enabled drivers build config 00:02:07.686 dma/hisilicon: not in enabled drivers build config 00:02:07.686 dma/idxd: not in enabled drivers build config 00:02:07.686 dma/ioat: not in enabled drivers build config 00:02:07.686 dma/skeleton: not in enabled drivers build config 00:02:07.686 net/af_packet: not in enabled drivers build config 00:02:07.686 net/af_xdp: not in enabled drivers build config 00:02:07.686 net/ark: not in enabled drivers build config 00:02:07.686 net/atlantic: not in enabled drivers build config 00:02:07.686 net/avp: not in enabled drivers build config 00:02:07.686 net/axgbe: not in enabled drivers build config 00:02:07.686 net/bnx2x: not in enabled drivers build config 00:02:07.686 net/bnxt: not in enabled drivers build config 00:02:07.686 net/bonding: not in enabled drivers build config 00:02:07.686 net/cnxk: not in enabled drivers build config 00:02:07.686 net/cpfl: not in enabled drivers build config 00:02:07.686 net/cxgbe: not in enabled drivers build config 00:02:07.686 net/dpaa: not in enabled drivers build config 00:02:07.686 net/dpaa2: not in enabled drivers build config 00:02:07.686 net/e1000: not in enabled drivers build config 00:02:07.686 net/ena: not in enabled drivers build config 00:02:07.686 net/enetc: not in enabled drivers build config 00:02:07.686 net/enetfec: not in enabled drivers build config 00:02:07.686 net/enic: not in enabled drivers build config 00:02:07.686 net/failsafe: not in enabled drivers build config 00:02:07.686 net/fm10k: not in enabled drivers build config 00:02:07.686 net/gve: not in enabled drivers build config 00:02:07.686 net/hinic: not in enabled drivers build config 00:02:07.686 net/hns3: not in enabled drivers build config 00:02:07.686 net/i40e: not in enabled drivers build config 00:02:07.686 net/iavf: not in enabled drivers build config 00:02:07.686 net/ice: not in enabled drivers build config 00:02:07.686 net/idpf: not in enabled drivers build config 00:02:07.686 net/igc: not in enabled drivers build config 00:02:07.686 net/ionic: not in enabled drivers build config 00:02:07.686 net/ipn3ke: not in enabled drivers build config 00:02:07.686 net/ixgbe: not in enabled drivers build config 00:02:07.686 net/mana: not in enabled drivers build config 00:02:07.686 net/memif: not in enabled drivers build config 00:02:07.686 net/mlx4: not in enabled drivers build config 00:02:07.686 net/mlx5: not in enabled drivers build config 00:02:07.686 net/mvneta: not in enabled drivers build config 00:02:07.686 net/mvpp2: not in enabled drivers build config 00:02:07.686 net/netvsc: not in enabled drivers build config 00:02:07.686 net/nfb: not in enabled drivers build config 00:02:07.686 net/nfp: not in enabled drivers build config 00:02:07.686 net/ngbe: not in enabled drivers build config 00:02:07.686 net/null: not in enabled drivers build config 
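The long "not in enabled drivers build config" listing is the complement of the enable_drivers/disable_libs selection dumped at the end of this configure output. A sketch for cross-checking after the build installs, assuming the library path is prefix plus libdir as reported in the log:

# List what was actually installed; path derived from prefix/libdir in this log.
ls /home/vagrant/spdk_repo/spdk/dpdk/build/lib | grep '^librte_' | sort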
00:02:07.686 net/octeontx: not in enabled drivers build config 00:02:07.686 net/octeon_ep: not in enabled drivers build config 00:02:07.686 net/pcap: not in enabled drivers build config 00:02:07.686 net/pfe: not in enabled drivers build config 00:02:07.686 net/qede: not in enabled drivers build config 00:02:07.686 net/ring: not in enabled drivers build config 00:02:07.686 net/sfc: not in enabled drivers build config 00:02:07.686 net/softnic: not in enabled drivers build config 00:02:07.686 net/tap: not in enabled drivers build config 00:02:07.686 net/thunderx: not in enabled drivers build config 00:02:07.686 net/txgbe: not in enabled drivers build config 00:02:07.686 net/vdev_netvsc: not in enabled drivers build config 00:02:07.686 net/vhost: not in enabled drivers build config 00:02:07.686 net/virtio: not in enabled drivers build config 00:02:07.686 net/vmxnet3: not in enabled drivers build config 00:02:07.686 raw/*: missing internal dependency, "rawdev" 00:02:07.686 crypto/armv8: not in enabled drivers build config 00:02:07.686 crypto/bcmfs: not in enabled drivers build config 00:02:07.686 crypto/caam_jr: not in enabled drivers build config 00:02:07.686 crypto/ccp: not in enabled drivers build config 00:02:07.686 crypto/cnxk: not in enabled drivers build config 00:02:07.686 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.686 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.686 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.686 crypto/mlx5: not in enabled drivers build config 00:02:07.686 crypto/mvsam: not in enabled drivers build config 00:02:07.686 crypto/nitrox: not in enabled drivers build config 00:02:07.686 crypto/null: not in enabled drivers build config 00:02:07.686 crypto/octeontx: not in enabled drivers build config 00:02:07.686 crypto/openssl: not in enabled drivers build config 00:02:07.686 crypto/scheduler: not in enabled drivers build config 00:02:07.686 crypto/uadk: not in enabled drivers build config 00:02:07.686 crypto/virtio: not in enabled drivers build config 00:02:07.686 compress/isal: not in enabled drivers build config 00:02:07.686 compress/mlx5: not in enabled drivers build config 00:02:07.686 compress/nitrox: not in enabled drivers build config 00:02:07.686 compress/octeontx: not in enabled drivers build config 00:02:07.686 compress/zlib: not in enabled drivers build config 00:02:07.686 regex/*: missing internal dependency, "regexdev" 00:02:07.686 ml/*: missing internal dependency, "mldev" 00:02:07.686 vdpa/ifc: not in enabled drivers build config 00:02:07.686 vdpa/mlx5: not in enabled drivers build config 00:02:07.686 vdpa/nfp: not in enabled drivers build config 00:02:07.686 vdpa/sfc: not in enabled drivers build config 00:02:07.686 event/*: missing internal dependency, "eventdev" 00:02:07.686 baseband/*: missing internal dependency, "bbdev" 00:02:07.686 gpu/*: missing internal dependency, "gpudev" 00:02:07.686 00:02:07.686 00:02:07.686 Build targets in project: 85 00:02:07.686 00:02:07.686 DPDK 24.03.0 00:02:07.686 00:02:07.686 User defined options 00:02:07.686 buildtype : debug 00:02:07.686 default_library : shared 00:02:07.686 libdir : lib 00:02:07.686 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:07.686 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:07.686 c_link_args : 00:02:07.686 cpu_instruction_set: native 00:02:07.686 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:07.686 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:07.686 enable_docs : false 00:02:07.686 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:07.686 enable_kmods : false 00:02:07.686 max_lcores : 128 00:02:07.686 tests : false 00:02:07.686 00:02:07.686 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.686 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:07.686 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.686 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.686 [3/268] Linking static target lib/librte_kvargs.a 00:02:07.686 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.686 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.686 [6/268] Linking static target lib/librte_log.a 00:02:07.686 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.686 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.686 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.686 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.686 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:07.686 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.686 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.686 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.945 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:07.945 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.945 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.945 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:07.945 [19/268] Linking target lib/librte_log.so.24.1 00:02:07.945 [20/268] Linking static target lib/librte_telemetry.a 00:02:08.203 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.459 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.459 [23/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.459 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.459 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.459 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:08.716 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.716 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.716 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.716 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:08.716 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:08.716 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:08.973 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.973 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:08.973 [35/268] Linking target lib/librte_telemetry.so.24.1
00:02:09.231 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:09.231 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:09.488 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:09.488 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:09.488 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:09.488 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:09.488 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:09.746 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:09.746 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:09.746 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:09.746 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:09.746 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:09.746 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:09.746 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:10.005 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:10.263 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:10.263 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:10.522 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:10.522 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:10.522 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:10.781 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:10.781 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:10.781 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:10.781 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:10.781 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:11.040 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:11.040 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:11.040 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:11.298 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:11.298 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:11.298 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:11.558 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:11.558 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:11.558 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:11.558 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:11.817 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:11.817 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:11.817 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:11.817 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:11.817 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:12.074 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:12.074 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:12.074 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:12.074 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:12.074 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:12.074 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:12.638 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:12.638 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:12.638 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:12.638 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:12.638 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:12.638 [87/268] Linking static target lib/librte_ring.a
00:02:12.896 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:12.896 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:12.896 [90/268] Linking static target lib/librte_eal.a
00:02:12.896 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:12.896 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:12.896 [93/268] Linking static target lib/librte_rcu.a
00:02:12.896 [94/268] Linking static target lib/librte_mempool.a
00:02:13.153 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:13.153 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.153 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:13.153 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:13.411 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:13.411 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:13.411 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:13.411 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.411 [103/268] Linking static target lib/librte_mbuf.a
00:02:13.669 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:13.669 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:13.669 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:13.926 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:13.926 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:13.926 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:13.926 [110/268] Linking static target lib/librte_net.a
00:02:13.926 [111/268] Linking static target lib/librte_meter.a
00:02:14.183 [112/268] Generating 
lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.440 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:14.440 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.440 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.440 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:14.696 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.696 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:14.953 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:15.211 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:15.211 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:15.469 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:15.469 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:15.726 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:15.726 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:15.726 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:15.726 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:15.726 [128/268] Linking static target lib/librte_pci.a
00:02:15.726 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:15.726 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:15.984 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:15.984 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:15.984 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:15.984 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:15.984 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:15.984 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.984 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:16.242 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:16.242 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:16.242 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:16.242 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:16.242 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:16.242 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:16.242 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:16.242 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:16.500 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:16.500 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:16.500 [148/268] Linking static target lib/librte_ethdev.a
00:02:16.500 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:16.500 [150/268] Linking static target lib/librte_cmdline.a
00:02:16.759 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:16.759 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:16.759 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:16.759 [154/268] Linking static target lib/librte_timer.a
00:02:17.017 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:17.017 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:17.017 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:17.586 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:17.586 [159/268] Linking static target lib/librte_compressdev.a
00:02:17.586 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.586 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:17.586 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:17.586 [163/268] Linking static target lib/librte_hash.a
00:02:17.586 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:17.586 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:17.958 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:17.958 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:18.232 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:18.232 [169/268] Linking static target lib/librte_dmadev.a
00:02:18.232 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:18.232 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.232 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:18.232 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:18.490 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:18.490 [175/268] Linking static target lib/librte_cryptodev.a
00:02:18.490 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.491 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:18.749 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.007 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:19.007 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:19.007 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.007 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:19.007 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:19.007 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:19.574 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:19.574 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:19.574 [187/268] Linking static target lib/librte_reorder.a
00:02:19.574 [188/268] Linking static target lib/librte_power.a
00:02:19.574 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:19.574 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:19.832 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:20.091 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:20.091 [193/268] Linking static target lib/librte_security.a
00:02:20.091 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.091 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:20.657 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:20.657 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.657 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:20.916 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.916 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:20.916 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:20.916 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.175 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:21.434 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:21.434 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:21.434 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:21.434 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:21.693 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:21.693 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:21.693 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:21.693 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:21.693 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:21.952 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:21.952 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:21.952 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:21.952 [216/268] Linking static target drivers/librte_bus_vdev.a
00:02:21.952 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:21.952 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:21.952 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:21.952 [220/268] Linking static target drivers/librte_bus_pci.a
00:02:21.952 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:21.952 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:22.211 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.211 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:22.211 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:22.211 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:22.211 [227/268] Linking static target drivers/librte_mempool_ring.a
00:02:22.469 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output)
00:02:22.729 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:22.729 [230/268] Linking static target lib/librte_vhost.a
00:02:23.666 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.666 [232/268] Linking target lib/librte_eal.so.24.1
00:02:23.925 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:23.925 [234/268] Linking target lib/librte_timer.so.24.1
00:02:23.925 [235/268] Linking target lib/librte_ring.so.24.1
00:02:23.925 [236/268] Linking target lib/librte_meter.so.24.1
00:02:23.925 [237/268] Linking target lib/librte_pci.so.24.1
00:02:23.925 [238/268] Linking target drivers/librte_bus_vdev.so.24.1
00:02:23.925 [239/268] Linking target lib/librte_dmadev.so.24.1
00:02:23.925 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:23.925 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:23.925 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:23.925 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:23.925 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:23.925 [245/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.925 [246/268] Linking target drivers/librte_bus_pci.so.24.1
00:02:23.925 [247/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.925 [248/268] Linking target lib/librte_rcu.so.24.1
00:02:23.925 [249/268] Linking target lib/librte_mempool.so.24.1
00:02:24.183 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:24.183 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:24.184 [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:02:24.184 [253/268] Linking target lib/librte_mbuf.so.24.1
00:02:24.442 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:24.442 [255/268] Linking target lib/librte_compressdev.so.24.1
00:02:24.442 [256/268] Linking target lib/librte_net.so.24.1
00:02:24.442 [257/268] Linking target lib/librte_cryptodev.so.24.1
00:02:24.442 [258/268] Linking target lib/librte_reorder.so.24.1
00:02:24.442 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:24.442 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:24.700 [261/268] Linking target lib/librte_cmdline.so.24.1
00:02:24.700 [262/268] Linking target lib/librte_hash.so.24.1
00:02:24.700 [263/268] Linking target lib/librte_security.so.24.1
00:02:24.700 [264/268] Linking target lib/librte_ethdev.so.24.1
00:02:24.700 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:24.700 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:24.700 [267/268] Linking target lib/librte_power.so.24.1
00:02:24.700 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:24.959 INFO: autodetecting backend as ninja
00:02:24.959 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:02:51.517 CC lib/log/log_flags.o
00:02:51.517 CC lib/log/log.o
00:02:51.517 CC lib/log/log_deprecated.o
00:02:51.517 CC 
lib/ut_mock/mock.o
00:02:51.517 CC lib/ut/ut.o
00:02:51.517 LIB libspdk_log.a
00:02:51.517 LIB libspdk_ut_mock.a
00:02:51.517 LIB libspdk_ut.a
00:02:51.517 SO libspdk_ut_mock.so.6.0
00:02:51.517 SO libspdk_log.so.7.1
00:02:51.517 SO libspdk_ut.so.2.0
00:02:51.517 SYMLINK libspdk_ut_mock.so
00:02:51.517 SYMLINK libspdk_log.so
00:02:51.517 SYMLINK libspdk_ut.so
00:02:51.517 CC lib/util/base64.o
00:02:51.517 CC lib/util/bit_array.o
00:02:51.517 CC lib/util/cpuset.o
00:02:51.517 CC lib/util/crc16.o
00:02:51.517 CC lib/util/crc32.o
00:02:51.517 CC lib/ioat/ioat.o
00:02:51.517 CC lib/dma/dma.o
00:02:51.517 CC lib/util/crc32c.o
00:02:51.517 CXX lib/trace_parser/trace.o
00:02:51.517 CC lib/vfio_user/host/vfio_user_pci.o
00:02:51.517 CC lib/util/crc32_ieee.o
00:02:51.517 CC lib/util/crc64.o
00:02:51.518 CC lib/util/dif.o
00:02:51.518 LIB libspdk_dma.a
00:02:51.518 CC lib/util/fd.o
00:02:51.518 CC lib/util/fd_group.o
00:02:51.518 CC lib/util/file.o
00:02:51.518 SO libspdk_dma.so.5.0
00:02:51.518 CC lib/vfio_user/host/vfio_user.o
00:02:51.518 CC lib/util/hexlify.o
00:02:51.518 SYMLINK libspdk_dma.so
00:02:51.518 CC lib/util/iov.o
00:02:51.518 CC lib/util/math.o
00:02:51.518 LIB libspdk_ioat.a
00:02:51.518 CC lib/util/net.o
00:02:51.518 SO libspdk_ioat.so.7.0
00:02:51.518 CC lib/util/pipe.o
00:02:51.518 SYMLINK libspdk_ioat.so
00:02:51.518 CC lib/util/strerror_tls.o
00:02:51.518 CC lib/util/string.o
00:02:51.518 LIB libspdk_vfio_user.a
00:02:51.518 CC lib/util/uuid.o
00:02:51.518 CC lib/util/xor.o
00:02:51.518 SO libspdk_vfio_user.so.5.0
00:02:51.518 CC lib/util/zipf.o
00:02:51.518 CC lib/util/md5.o
00:02:51.518 SYMLINK libspdk_vfio_user.so
00:02:51.518 LIB libspdk_util.a
00:02:51.518 SO libspdk_util.so.10.1
00:02:51.518 SYMLINK libspdk_util.so
00:02:51.518 LIB libspdk_trace_parser.a
00:02:51.518 SO libspdk_trace_parser.so.6.0
00:02:51.518 CC lib/env_dpdk/env.o
00:02:51.518 CC lib/env_dpdk/memory.o
00:02:51.518 CC lib/conf/conf.o
00:02:51.518 CC lib/json/json_parse.o
00:02:51.518 CC lib/env_dpdk/pci.o
00:02:51.518 CC lib/json/json_util.o
00:02:51.518 CC lib/vmd/vmd.o
00:02:51.518 CC lib/rdma_utils/rdma_utils.o
00:02:51.518 CC lib/idxd/idxd.o
00:02:51.518 SYMLINK libspdk_trace_parser.so
00:02:51.518 CC lib/idxd/idxd_user.o
00:02:51.518 CC lib/idxd/idxd_kernel.o
00:02:51.518 CC lib/json/json_write.o
00:02:51.518 LIB libspdk_conf.a
00:02:51.518 SO libspdk_conf.so.6.0
00:02:51.518 LIB libspdk_rdma_utils.a
00:02:51.518 SO libspdk_rdma_utils.so.1.0
00:02:51.518 SYMLINK libspdk_conf.so
00:02:51.518 CC lib/vmd/led.o
00:02:51.518 CC lib/env_dpdk/init.o
00:02:51.518 SYMLINK libspdk_rdma_utils.so
00:02:51.518 CC lib/env_dpdk/threads.o
00:02:51.518 CC lib/env_dpdk/pci_ioat.o
00:02:51.518 CC lib/env_dpdk/pci_virtio.o
00:02:51.518 CC lib/env_dpdk/pci_vmd.o
00:02:51.518 CC lib/env_dpdk/pci_idxd.o
00:02:51.518 CC lib/env_dpdk/pci_event.o
00:02:51.518 LIB libspdk_json.a
00:02:51.518 CC lib/env_dpdk/sigbus_handler.o
00:02:51.518 SO libspdk_json.so.6.0
00:02:51.518 CC lib/env_dpdk/pci_dpdk.o
00:02:51.518 LIB libspdk_idxd.a
00:02:51.518 SYMLINK libspdk_json.so
00:02:51.518 LIB libspdk_vmd.a
00:02:51.518 SO libspdk_idxd.so.12.1
00:02:51.518 SO libspdk_vmd.so.6.0
00:02:51.518 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:51.518 SYMLINK libspdk_idxd.so
00:02:51.518 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:51.518 SYMLINK libspdk_vmd.so
00:02:51.518 CC lib/jsonrpc/jsonrpc_server.o
00:02:51.518 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:51.518 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:51.518 CC lib/jsonrpc/jsonrpc_client.o
00:02:51.518 CC 
lib/rdma_provider/rdma_provider_verbs.o
00:02:51.518 CC lib/rdma_provider/common.o
00:02:51.518 LIB libspdk_rdma_provider.a
00:02:51.518 LIB libspdk_jsonrpc.a
00:02:51.518 SO libspdk_rdma_provider.so.7.0
00:02:51.518 SO libspdk_jsonrpc.so.6.0
00:02:51.518 SYMLINK libspdk_rdma_provider.so
00:02:51.518 SYMLINK libspdk_jsonrpc.so
00:02:51.518 LIB libspdk_env_dpdk.a
00:02:51.518 CC lib/rpc/rpc.o
00:02:51.518 SO libspdk_env_dpdk.so.15.1
00:02:51.518 SYMLINK libspdk_env_dpdk.so
00:02:51.518 LIB libspdk_rpc.a
00:02:51.518 SO libspdk_rpc.so.6.0
00:02:51.518 SYMLINK libspdk_rpc.so
00:02:51.518 CC lib/keyring/keyring.o
00:02:51.518 CC lib/keyring/keyring_rpc.o
00:02:51.518 CC lib/trace/trace_flags.o
00:02:51.518 CC lib/trace/trace.o
00:02:51.518 CC lib/trace/trace_rpc.o
00:02:51.518 CC lib/notify/notify.o
00:02:51.518 CC lib/notify/notify_rpc.o
00:02:51.518 LIB libspdk_notify.a
00:02:51.518 SO libspdk_notify.so.6.0
00:02:51.518 LIB libspdk_trace.a
00:02:51.518 LIB libspdk_keyring.a
00:02:51.518 SYMLINK libspdk_notify.so
00:02:51.518 SO libspdk_trace.so.11.0
00:02:51.518 SO libspdk_keyring.so.2.0
00:02:51.518 SYMLINK libspdk_trace.so
00:02:51.518 SYMLINK libspdk_keyring.so
00:02:51.518 CC lib/thread/iobuf.o
00:02:51.518 CC lib/thread/thread.o
00:02:51.518 CC lib/sock/sock.o
00:02:51.518 CC lib/sock/sock_rpc.o
00:02:51.777 LIB libspdk_sock.a
00:02:51.777 SO libspdk_sock.so.10.0
00:02:52.035 SYMLINK libspdk_sock.so
00:02:52.295 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:52.295 CC lib/nvme/nvme_ctrlr.o
00:02:52.295 CC lib/nvme/nvme_fabric.o
00:02:52.295 CC lib/nvme/nvme_ns.o
00:02:52.295 CC lib/nvme/nvme_ns_cmd.o
00:02:52.295 CC lib/nvme/nvme_qpair.o
00:02:52.295 CC lib/nvme/nvme_pcie_common.o
00:02:52.295 CC lib/nvme/nvme_pcie.o
00:02:52.295 CC lib/nvme/nvme.o
00:02:53.231 CC lib/nvme/nvme_quirks.o
00:02:53.231 CC lib/nvme/nvme_transport.o
00:02:53.231 LIB libspdk_thread.a
00:02:53.231 SO libspdk_thread.so.11.0
00:02:53.231 CC lib/nvme/nvme_discovery.o
00:02:53.231 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:53.231 SYMLINK libspdk_thread.so
00:02:53.231 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:53.231 CC lib/nvme/nvme_tcp.o
00:02:53.231 CC lib/nvme/nvme_opal.o
00:02:53.489 CC lib/nvme/nvme_io_msg.o
00:02:53.489 CC lib/accel/accel.o
00:02:53.747 CC lib/accel/accel_rpc.o
00:02:53.747 CC lib/accel/accel_sw.o
00:02:53.747 CC lib/nvme/nvme_poll_group.o
00:02:53.747 CC lib/nvme/nvme_zns.o
00:02:54.005 CC lib/nvme/nvme_stubs.o
00:02:54.005 CC lib/blob/blobstore.o
00:02:54.005 CC lib/init/json_config.o
00:02:54.005 CC lib/init/subsystem.o
00:02:54.264 CC lib/virtio/virtio.o
00:02:54.264 CC lib/virtio/virtio_vhost_user.o
00:02:54.264 CC lib/init/subsystem_rpc.o
00:02:54.522 CC lib/init/rpc.o
00:02:54.522 CC lib/virtio/virtio_vfio_user.o
00:02:54.522 CC lib/virtio/virtio_pci.o
00:02:54.522 CC lib/nvme/nvme_auth.o
00:02:54.522 LIB libspdk_accel.a
00:02:54.522 CC lib/nvme/nvme_cuse.o
00:02:54.522 CC lib/blob/request.o
00:02:54.522 SO libspdk_accel.so.16.0
00:02:54.522 LIB libspdk_init.a
00:02:54.522 SO libspdk_init.so.6.0
00:02:54.779 SYMLINK libspdk_accel.so
00:02:54.779 CC lib/nvme/nvme_rdma.o
00:02:54.779 CC lib/fsdev/fsdev.o
00:02:54.779 SYMLINK libspdk_init.so
00:02:54.779 CC lib/fsdev/fsdev_io.o
00:02:54.779 CC lib/fsdev/fsdev_rpc.o
00:02:54.779 LIB libspdk_virtio.a
00:02:54.779 SO libspdk_virtio.so.7.0
00:02:54.779 CC lib/blob/zeroes.o
00:02:55.037 CC lib/bdev/bdev.o
00:02:55.037 SYMLINK libspdk_virtio.so
00:02:55.037 CC lib/bdev/bdev_rpc.o
00:02:55.037 CC lib/bdev/bdev_zone.o
00:02:55.037 CC lib/event/app.o
00:02:55.295 
CC lib/event/reactor.o
00:02:55.295 CC lib/bdev/part.o
00:02:55.295 CC lib/event/log_rpc.o
00:02:55.295 LIB libspdk_fsdev.a
00:02:55.295 SO libspdk_fsdev.so.2.0
00:02:55.554 SYMLINK libspdk_fsdev.so
00:02:55.554 CC lib/event/app_rpc.o
00:02:55.554 CC lib/blob/blob_bs_dev.o
00:02:55.554 CC lib/bdev/scsi_nvme.o
00:02:55.554 CC lib/event/scheduler_static.o
00:02:55.554 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:02:55.812 LIB libspdk_event.a
00:02:55.812 SO libspdk_event.so.14.0
00:02:55.812 SYMLINK libspdk_event.so
00:02:56.070 LIB libspdk_nvme.a
00:02:56.329 LIB libspdk_fuse_dispatcher.a
00:02:56.329 SO libspdk_nvme.so.15.0
00:02:56.329 SO libspdk_fuse_dispatcher.so.1.0
00:02:56.329 SYMLINK libspdk_fuse_dispatcher.so
00:02:56.587 SYMLINK libspdk_nvme.so
00:02:56.846 LIB libspdk_blob.a
00:02:57.104 SO libspdk_blob.so.12.0
00:02:57.104 SYMLINK libspdk_blob.so
00:02:57.364 CC lib/lvol/lvol.o
00:02:57.364 CC lib/blobfs/blobfs.o
00:02:57.364 CC lib/blobfs/tree.o
00:02:57.622 LIB libspdk_bdev.a
00:02:57.880 SO libspdk_bdev.so.17.0
00:02:57.880 SYMLINK libspdk_bdev.so
00:02:58.145 CC lib/nbd/nbd.o
00:02:58.145 CC lib/nvmf/ctrlr.o
00:02:58.145 CC lib/nbd/nbd_rpc.o
00:02:58.145 CC lib/nvmf/ctrlr_discovery.o
00:02:58.145 CC lib/nvmf/ctrlr_bdev.o
00:02:58.145 CC lib/ublk/ublk.o
00:02:58.145 CC lib/ftl/ftl_core.o
00:02:58.145 CC lib/scsi/dev.o
00:02:58.145 LIB libspdk_blobfs.a
00:02:58.145 SO libspdk_blobfs.so.11.0
00:02:58.145 LIB libspdk_lvol.a
00:02:58.411 CC lib/ftl/ftl_init.o
00:02:58.411 SO libspdk_lvol.so.11.0
00:02:58.411 SYMLINK libspdk_blobfs.so
00:02:58.411 CC lib/nvmf/subsystem.o
00:02:58.411 SYMLINK libspdk_lvol.so
00:02:58.411 CC lib/nvmf/nvmf.o
00:02:58.411 CC lib/scsi/lun.o
00:02:58.411 CC lib/nvmf/nvmf_rpc.o
00:02:58.411 CC lib/ftl/ftl_layout.o
00:02:58.411 LIB libspdk_nbd.a
00:02:58.669 SO libspdk_nbd.so.7.0
00:02:58.669 CC lib/ftl/ftl_debug.o
00:02:58.669 SYMLINK libspdk_nbd.so
00:02:58.669 CC lib/ftl/ftl_io.o
00:02:58.669 CC lib/scsi/port.o
00:02:58.669 CC lib/ublk/ublk_rpc.o
00:02:58.669 CC lib/ftl/ftl_sb.o
00:02:58.927 CC lib/scsi/scsi.o
00:02:58.927 CC lib/nvmf/transport.o
00:02:58.927 CC lib/scsi/scsi_bdev.o
00:02:58.927 CC lib/scsi/scsi_pr.o
00:02:58.927 LIB libspdk_ublk.a
00:02:58.927 SO libspdk_ublk.so.3.0
00:02:58.927 CC lib/ftl/ftl_l2p.o
00:02:58.927 CC lib/ftl/ftl_l2p_flat.o
00:02:59.185 SYMLINK libspdk_ublk.so
00:02:59.186 CC lib/scsi/scsi_rpc.o
00:02:59.186 CC lib/scsi/task.o
00:02:59.186 CC lib/ftl/ftl_nv_cache.o
00:02:59.186 CC lib/ftl/ftl_band.o
00:02:59.186 CC lib/nvmf/tcp.o
00:02:59.444 CC lib/nvmf/stubs.o
00:02:59.444 CC lib/nvmf/mdns_server.o
00:02:59.444 CC lib/ftl/ftl_band_ops.o
00:02:59.444 LIB libspdk_scsi.a
00:02:59.444 SO libspdk_scsi.so.9.0
00:02:59.703 CC lib/ftl/ftl_writer.o
00:02:59.703 SYMLINK libspdk_scsi.so
00:02:59.703 CC lib/ftl/ftl_rq.o
00:02:59.703 CC lib/ftl/ftl_reloc.o
00:02:59.703 CC lib/ftl/ftl_l2p_cache.o
00:02:59.703 CC lib/nvmf/rdma.o
00:02:59.703 CC lib/nvmf/auth.o
00:02:59.703 CC lib/ftl/ftl_p2l.o
00:02:59.703 CC lib/ftl/ftl_p2l_log.o
00:02:59.962 CC lib/ftl/mngt/ftl_mngt.o
00:02:59.962 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:59.962 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:00.219 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:00.219 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:00.219 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:00.219 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:00.219 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:00.219 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:00.478 CC lib/iscsi/conn.o
00:03:00.478 CC lib/vhost/vhost.o
00:03:00.478 CC 
lib/ftl/mngt/ftl_mngt_self_test.o
00:03:00.478 CC lib/iscsi/init_grp.o
00:03:00.478 CC lib/iscsi/iscsi.o
00:03:00.478 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:00.478 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:00.736 CC lib/iscsi/param.o
00:03:00.736 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:00.736 CC lib/vhost/vhost_rpc.o
00:03:00.736 CC lib/iscsi/portal_grp.o
00:03:00.994 CC lib/iscsi/tgt_node.o
00:03:00.994 CC lib/iscsi/iscsi_subsystem.o
00:03:00.994 CC lib/iscsi/iscsi_rpc.o
00:03:00.994 CC lib/ftl/utils/ftl_conf.o
00:03:00.994 CC lib/ftl/utils/ftl_md.o
00:03:00.994 CC lib/ftl/utils/ftl_mempool.o
00:03:01.252 CC lib/vhost/vhost_scsi.o
00:03:01.252 CC lib/vhost/vhost_blk.o
00:03:01.252 CC lib/vhost/rte_vhost_user.o
00:03:01.510 CC lib/iscsi/task.o
00:03:01.510 CC lib/ftl/utils/ftl_bitmap.o
00:03:01.510 CC lib/ftl/utils/ftl_property.o
00:03:01.510 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:01.510 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:01.510 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:01.510 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:01.770 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:01.770 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:01.770 LIB libspdk_nvmf.a
00:03:01.770 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:01.770 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:01.770 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:01.770 LIB libspdk_iscsi.a
00:03:02.029 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:02.029 SO libspdk_nvmf.so.20.0
00:03:02.029 SO libspdk_iscsi.so.8.0
00:03:02.029 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:02.029 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:02.029 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:02.029 CC lib/ftl/base/ftl_base_dev.o
00:03:02.029 SYMLINK libspdk_iscsi.so
00:03:02.029 CC lib/ftl/base/ftl_base_bdev.o
00:03:02.029 SYMLINK libspdk_nvmf.so
00:03:02.029 CC lib/ftl/ftl_trace.o
00:03:02.288 LIB libspdk_ftl.a
00:03:02.548 LIB libspdk_vhost.a
00:03:02.548 SO libspdk_vhost.so.8.0
00:03:02.548 SO libspdk_ftl.so.9.0
00:03:02.807 SYMLINK libspdk_vhost.so
00:03:02.807 SYMLINK libspdk_ftl.so
00:03:03.373 CC module/env_dpdk/env_dpdk_rpc.o
00:03:03.373 CC module/accel/iaa/accel_iaa.o
00:03:03.373 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:03.373 CC module/accel/error/accel_error.o
00:03:03.373 CC module/accel/dsa/accel_dsa.o
00:03:03.373 CC module/sock/posix/posix.o
00:03:03.373 CC module/blob/bdev/blob_bdev.o
00:03:03.373 CC module/keyring/file/keyring.o
00:03:03.373 CC module/fsdev/aio/fsdev_aio.o
00:03:03.373 CC module/accel/ioat/accel_ioat.o
00:03:03.373 LIB libspdk_env_dpdk_rpc.a
00:03:03.373 SO libspdk_env_dpdk_rpc.so.6.0
00:03:03.373 SYMLINK libspdk_env_dpdk_rpc.so
00:03:03.373 CC module/keyring/file/keyring_rpc.o
00:03:03.631 CC module/accel/ioat/accel_ioat_rpc.o
00:03:03.631 CC module/accel/iaa/accel_iaa_rpc.o
00:03:03.631 CC module/accel/error/accel_error_rpc.o
00:03:03.631 LIB libspdk_scheduler_dynamic.a
00:03:03.631 SO libspdk_scheduler_dynamic.so.4.0
00:03:03.631 LIB libspdk_blob_bdev.a
00:03:03.631 SYMLINK libspdk_scheduler_dynamic.so
00:03:03.631 LIB libspdk_keyring_file.a
00:03:03.631 LIB libspdk_accel_ioat.a
00:03:03.631 SO libspdk_blob_bdev.so.12.0
00:03:03.631 SO libspdk_keyring_file.so.2.0
00:03:03.631 LIB libspdk_accel_error.a
00:03:03.631 SO libspdk_accel_ioat.so.6.0
00:03:03.631 LIB libspdk_accel_iaa.a
00:03:03.631 SO libspdk_accel_error.so.2.0
00:03:03.631 SYMLINK libspdk_blob_bdev.so
00:03:03.631 CC module/accel/dsa/accel_dsa_rpc.o
00:03:03.631 CC module/keyring/linux/keyring.o
00:03:03.888 CC module/keyring/linux/keyring_rpc.o
00:03:03.888 SO 
libspdk_accel_iaa.so.3.0
00:03:03.888 SYMLINK libspdk_accel_ioat.so
00:03:03.888 SYMLINK libspdk_keyring_file.so
00:03:03.888 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:03.888 LIB libspdk_accel_dsa.a
00:03:03.888 SYMLINK libspdk_accel_error.so
00:03:03.888 SYMLINK libspdk_accel_iaa.so
00:03:03.888 CC module/fsdev/aio/linux_aio_mgr.o
00:03:03.888 SO libspdk_accel_dsa.so.5.0
00:03:03.888 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:03.888 SYMLINK libspdk_accel_dsa.so
00:03:03.888 LIB libspdk_keyring_linux.a
00:03:03.888 SO libspdk_keyring_linux.so.1.0
00:03:03.888 CC module/scheduler/gscheduler/gscheduler.o
00:03:04.146 SYMLINK libspdk_keyring_linux.so
00:03:04.146 LIB libspdk_fsdev_aio.a
00:03:04.146 LIB libspdk_scheduler_dpdk_governor.a
00:03:04.146 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:04.146 SO libspdk_fsdev_aio.so.1.0
00:03:04.146 LIB libspdk_sock_posix.a
00:03:04.146 CC module/bdev/delay/vbdev_delay.o
00:03:04.146 CC module/bdev/gpt/gpt.o
00:03:04.146 CC module/bdev/error/vbdev_error.o
00:03:04.146 SO libspdk_sock_posix.so.6.0
00:03:04.146 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:04.146 SYMLINK libspdk_fsdev_aio.so
00:03:04.146 LIB libspdk_scheduler_gscheduler.a
00:03:04.146 CC module/blobfs/bdev/blobfs_bdev.o
00:03:04.146 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:04.146 CC module/bdev/lvol/vbdev_lvol.o
00:03:04.146 SO libspdk_scheduler_gscheduler.so.4.0
00:03:04.146 CC module/bdev/malloc/bdev_malloc.o
00:03:04.146 SYMLINK libspdk_sock_posix.so
00:03:04.146 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:04.146 SYMLINK libspdk_scheduler_gscheduler.so
00:03:04.146 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:04.405 CC module/bdev/gpt/vbdev_gpt.o
00:03:04.405 CC module/bdev/null/bdev_null.o
00:03:04.405 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:04.405 CC module/bdev/error/vbdev_error_rpc.o
00:03:04.405 LIB libspdk_blobfs_bdev.a
00:03:04.405 CC module/bdev/null/bdev_null_rpc.o
00:03:04.405 SO libspdk_blobfs_bdev.so.6.0
00:03:04.405 LIB libspdk_bdev_delay.a
00:03:04.666 SO libspdk_bdev_delay.so.6.0
00:03:04.666 SYMLINK libspdk_blobfs_bdev.so
00:03:04.666 CC module/bdev/nvme/bdev_nvme.o
00:03:04.666 LIB libspdk_bdev_error.a
00:03:04.666 SYMLINK libspdk_bdev_delay.so
00:03:04.666 LIB libspdk_bdev_malloc.a
00:03:04.666 LIB libspdk_bdev_gpt.a
00:03:04.666 SO libspdk_bdev_error.so.6.0
00:03:04.666 SO libspdk_bdev_malloc.so.6.0
00:03:04.666 SO libspdk_bdev_gpt.so.6.0
00:03:04.666 LIB libspdk_bdev_null.a
00:03:04.666 SO libspdk_bdev_null.so.6.0
00:03:04.666 SYMLINK libspdk_bdev_error.so
00:03:04.666 SYMLINK libspdk_bdev_malloc.so
00:03:04.666 SYMLINK libspdk_bdev_gpt.so
00:03:04.666 SYMLINK libspdk_bdev_null.so
00:03:04.666 CC module/bdev/passthru/vbdev_passthru.o
00:03:04.666 LIB libspdk_bdev_lvol.a
00:03:04.666 CC module/bdev/raid/bdev_raid.o
00:03:04.666 SO libspdk_bdev_lvol.so.6.0
00:03:04.970 CC module/bdev/split/vbdev_split.o
00:03:04.970 SYMLINK libspdk_bdev_lvol.so
00:03:04.970 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:04.970 CC module/bdev/ftl/bdev_ftl.o
00:03:04.970 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:04.970 CC module/bdev/aio/bdev_aio.o
00:03:04.970 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:04.970 CC module/bdev/iscsi/bdev_iscsi.o
00:03:04.970 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:04.970 CC module/bdev/split/vbdev_split_rpc.o
00:03:05.228 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:05.228 LIB libspdk_bdev_passthru.a
00:03:05.228 LIB libspdk_bdev_split.a
00:03:05.228 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:05.228 
SO libspdk_bdev_passthru.so.6.0
00:03:05.228 SO libspdk_bdev_split.so.6.0
00:03:05.228 CC module/bdev/aio/bdev_aio_rpc.o
00:03:05.228 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:05.228 SYMLINK libspdk_bdev_split.so
00:03:05.228 SYMLINK libspdk_bdev_passthru.so
00:03:05.228 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:05.228 CC module/bdev/nvme/nvme_rpc.o
00:03:05.486 LIB libspdk_bdev_ftl.a
00:03:05.486 LIB libspdk_bdev_zone_block.a
00:03:05.486 LIB libspdk_bdev_aio.a
00:03:05.486 SO libspdk_bdev_ftl.so.6.0
00:03:05.486 SO libspdk_bdev_zone_block.so.6.0
00:03:05.486 SO libspdk_bdev_aio.so.6.0
00:03:05.486 LIB libspdk_bdev_iscsi.a
00:03:05.486 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:05.486 CC module/bdev/nvme/bdev_mdns_client.o
00:03:05.486 SYMLINK libspdk_bdev_zone_block.so
00:03:05.486 SYMLINK libspdk_bdev_ftl.so
00:03:05.486 CC module/bdev/raid/bdev_raid_rpc.o
00:03:05.486 SO libspdk_bdev_iscsi.so.6.0
00:03:05.486 SYMLINK libspdk_bdev_aio.so
00:03:05.486 CC module/bdev/nvme/vbdev_opal.o
00:03:05.486 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:05.486 SYMLINK libspdk_bdev_iscsi.so
00:03:05.486 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:05.486 CC module/bdev/raid/bdev_raid_sb.o
00:03:05.745 CC module/bdev/raid/raid0.o
00:03:05.745 LIB libspdk_bdev_virtio.a
00:03:05.745 SO libspdk_bdev_virtio.so.6.0
00:03:05.745 CC module/bdev/raid/raid1.o
00:03:05.745 CC module/bdev/raid/concat.o
00:03:05.745 SYMLINK libspdk_bdev_virtio.so
00:03:06.003 LIB libspdk_bdev_raid.a
00:03:06.262 SO libspdk_bdev_raid.so.6.0
00:03:06.262 SYMLINK libspdk_bdev_raid.so
00:03:07.196 LIB libspdk_bdev_nvme.a
00:03:07.196 SO libspdk_bdev_nvme.so.7.1
00:03:07.196 SYMLINK libspdk_bdev_nvme.so
00:03:07.763 CC module/event/subsystems/iobuf/iobuf.o
00:03:07.763 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:07.763 CC module/event/subsystems/keyring/keyring.o
00:03:07.763 CC module/event/subsystems/scheduler/scheduler.o
00:03:07.763 CC module/event/subsystems/sock/sock.o
00:03:07.763 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:07.763 CC module/event/subsystems/fsdev/fsdev.o
00:03:07.763 CC module/event/subsystems/vmd/vmd.o
00:03:07.763 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:08.022 LIB libspdk_event_fsdev.a
00:03:08.022 LIB libspdk_event_keyring.a
00:03:08.022 LIB libspdk_event_vhost_blk.a
00:03:08.022 LIB libspdk_event_scheduler.a
00:03:08.022 LIB libspdk_event_sock.a
00:03:08.022 SO libspdk_event_keyring.so.1.0
00:03:08.022 SO libspdk_event_fsdev.so.1.0
00:03:08.022 LIB libspdk_event_iobuf.a
00:03:08.022 SO libspdk_event_scheduler.so.4.0
00:03:08.022 SO libspdk_event_vhost_blk.so.3.0
00:03:08.022 LIB libspdk_event_vmd.a
00:03:08.022 SO libspdk_event_sock.so.5.0
00:03:08.022 SO libspdk_event_iobuf.so.3.0
00:03:08.022 SO libspdk_event_vmd.so.6.0
00:03:08.022 SYMLINK libspdk_event_keyring.so
00:03:08.022 SYMLINK libspdk_event_fsdev.so
00:03:08.022 SYMLINK libspdk_event_scheduler.so
00:03:08.022 SYMLINK libspdk_event_sock.so
00:03:08.022 SYMLINK libspdk_event_vhost_blk.so
00:03:08.022 SYMLINK libspdk_event_vmd.so
00:03:08.022 SYMLINK libspdk_event_iobuf.so
00:03:08.303 CC module/event/subsystems/accel/accel.o
00:03:08.562 LIB libspdk_event_accel.a
00:03:08.562 SO libspdk_event_accel.so.6.0
00:03:08.562 SYMLINK libspdk_event_accel.so
00:03:08.820 CC module/event/subsystems/bdev/bdev.o
00:03:09.079 LIB libspdk_event_bdev.a
00:03:09.079 SO libspdk_event_bdev.so.6.0
00:03:09.338 SYMLINK libspdk_event_bdev.so
00:03:09.338 CC module/event/subsystems/ublk/ublk.o
00:03:09.338 CC 
module/event/subsystems/nvmf/nvmf_rpc.o
00:03:09.338 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:09.338 CC module/event/subsystems/scsi/scsi.o
00:03:09.338 CC module/event/subsystems/nbd/nbd.o
00:03:09.597 LIB libspdk_event_ublk.a
00:03:09.597 LIB libspdk_event_nbd.a
00:03:09.597 LIB libspdk_event_scsi.a
00:03:09.597 SO libspdk_event_ublk.so.3.0
00:03:09.597 SO libspdk_event_nbd.so.6.0
00:03:09.597 SO libspdk_event_scsi.so.6.0
00:03:09.856 SYMLINK libspdk_event_ublk.so
00:03:09.856 SYMLINK libspdk_event_nbd.so
00:03:09.856 SYMLINK libspdk_event_scsi.so
00:03:09.856 LIB libspdk_event_nvmf.a
00:03:09.856 SO libspdk_event_nvmf.so.6.0
00:03:09.856 SYMLINK libspdk_event_nvmf.so
00:03:10.115 CC module/event/subsystems/iscsi/iscsi.o
00:03:10.115 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:10.115 LIB libspdk_event_vhost_scsi.a
00:03:10.115 LIB libspdk_event_iscsi.a
00:03:10.115 SO libspdk_event_vhost_scsi.so.3.0
00:03:10.373 SO libspdk_event_iscsi.so.6.0
00:03:10.373 SYMLINK libspdk_event_vhost_scsi.so
00:03:10.373 SYMLINK libspdk_event_iscsi.so
00:03:10.373 SO libspdk.so.6.0
00:03:10.373 SYMLINK libspdk.so
00:03:10.632 CC app/spdk_lspci/spdk_lspci.o
00:03:10.632 CXX app/trace/trace.o
00:03:10.632 CC app/trace_record/trace_record.o
00:03:10.890 CC app/iscsi_tgt/iscsi_tgt.o
00:03:10.890 CC app/nvmf_tgt/nvmf_main.o
00:03:10.890 CC app/spdk_tgt/spdk_tgt.o
00:03:10.890 CC test/thread/poller_perf/poller_perf.o
00:03:10.890 CC examples/util/zipf/zipf.o
00:03:10.890 CC test/dma/test_dma/test_dma.o
00:03:10.890 LINK spdk_lspci
00:03:11.150 CC test/app/bdev_svc/bdev_svc.o
00:03:11.150 LINK nvmf_tgt
00:03:11.150 LINK poller_perf
00:03:11.150 LINK iscsi_tgt
00:03:11.150 LINK spdk_trace_record
00:03:11.150 LINK zipf
00:03:11.150 LINK spdk_tgt
00:03:11.150 LINK bdev_svc
00:03:11.150 LINK spdk_trace
00:03:11.150 CC app/spdk_nvme_perf/perf.o
00:03:11.409 TEST_HEADER include/spdk/accel.h
00:03:11.409 TEST_HEADER include/spdk/accel_module.h
00:03:11.409 TEST_HEADER include/spdk/assert.h
00:03:11.409 CC app/spdk_nvme_identify/identify.o
00:03:11.409 CC app/spdk_nvme_discover/discovery_aer.o
00:03:11.409 TEST_HEADER include/spdk/barrier.h
00:03:11.409 TEST_HEADER include/spdk/base64.h
00:03:11.409 TEST_HEADER include/spdk/bdev.h
00:03:11.409 TEST_HEADER include/spdk/bdev_module.h
00:03:11.409 TEST_HEADER include/spdk/bdev_zone.h
00:03:11.409 TEST_HEADER include/spdk/bit_array.h
00:03:11.409 TEST_HEADER include/spdk/bit_pool.h
00:03:11.409 TEST_HEADER include/spdk/blob_bdev.h
00:03:11.409 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:11.409 TEST_HEADER include/spdk/blobfs.h
00:03:11.409 TEST_HEADER include/spdk/blob.h
00:03:11.409 TEST_HEADER include/spdk/conf.h
00:03:11.409 TEST_HEADER include/spdk/config.h
00:03:11.409 TEST_HEADER include/spdk/cpuset.h
00:03:11.409 TEST_HEADER include/spdk/crc16.h
00:03:11.409 TEST_HEADER include/spdk/crc32.h
00:03:11.409 TEST_HEADER include/spdk/crc64.h
00:03:11.409 CC app/spdk_top/spdk_top.o
00:03:11.409 TEST_HEADER include/spdk/dif.h
00:03:11.409 TEST_HEADER include/spdk/dma.h
00:03:11.409 TEST_HEADER include/spdk/endian.h
00:03:11.409 TEST_HEADER include/spdk/env_dpdk.h
00:03:11.409 TEST_HEADER include/spdk/env.h
00:03:11.409 TEST_HEADER include/spdk/event.h
00:03:11.409 TEST_HEADER include/spdk/fd_group.h
00:03:11.409 TEST_HEADER include/spdk/fd.h
00:03:11.409 TEST_HEADER include/spdk/file.h
00:03:11.409 TEST_HEADER include/spdk/fsdev.h
00:03:11.409 TEST_HEADER include/spdk/fsdev_module.h
00:03:11.409 TEST_HEADER include/spdk/ftl.h
00:03:11.409 TEST_HEADER 
include/spdk/fuse_dispatcher.h
00:03:11.409 TEST_HEADER include/spdk/gpt_spec.h
00:03:11.409 TEST_HEADER include/spdk/hexlify.h
00:03:11.409 TEST_HEADER include/spdk/histogram_data.h
00:03:11.409 TEST_HEADER include/spdk/idxd.h
00:03:11.409 TEST_HEADER include/spdk/idxd_spec.h
00:03:11.409 TEST_HEADER include/spdk/init.h
00:03:11.409 CC examples/ioat/perf/perf.o
00:03:11.409 TEST_HEADER include/spdk/ioat.h
00:03:11.409 TEST_HEADER include/spdk/ioat_spec.h
00:03:11.409 TEST_HEADER include/spdk/iscsi_spec.h
00:03:11.409 TEST_HEADER include/spdk/json.h
00:03:11.409 TEST_HEADER include/spdk/jsonrpc.h
00:03:11.409 LINK test_dma
00:03:11.409 TEST_HEADER include/spdk/keyring.h
00:03:11.409 TEST_HEADER include/spdk/keyring_module.h
00:03:11.409 TEST_HEADER include/spdk/likely.h
00:03:11.409 TEST_HEADER include/spdk/log.h
00:03:11.409 TEST_HEADER include/spdk/lvol.h
00:03:11.409 TEST_HEADER include/spdk/md5.h
00:03:11.409 TEST_HEADER include/spdk/memory.h
00:03:11.409 TEST_HEADER include/spdk/mmio.h
00:03:11.409 TEST_HEADER include/spdk/nbd.h
00:03:11.409 TEST_HEADER include/spdk/net.h
00:03:11.409 TEST_HEADER include/spdk/notify.h
00:03:11.409 TEST_HEADER include/spdk/nvme.h
00:03:11.409 TEST_HEADER include/spdk/nvme_intel.h
00:03:11.409 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:11.409 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:11.409 TEST_HEADER include/spdk/nvme_spec.h
00:03:11.409 TEST_HEADER include/spdk/nvme_zns.h
00:03:11.409 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:11.409 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:11.409 TEST_HEADER include/spdk/nvmf.h
00:03:11.409 TEST_HEADER include/spdk/nvmf_spec.h
00:03:11.409 TEST_HEADER include/spdk/nvmf_transport.h
00:03:11.409 TEST_HEADER include/spdk/opal.h
00:03:11.409 CC examples/ioat/verify/verify.o
00:03:11.409 TEST_HEADER include/spdk/opal_spec.h
00:03:11.409 TEST_HEADER include/spdk/pci_ids.h
00:03:11.409 CC examples/vmd/lsvmd/lsvmd.o
00:03:11.409 TEST_HEADER include/spdk/pipe.h
00:03:11.409 TEST_HEADER include/spdk/queue.h
00:03:11.409 TEST_HEADER include/spdk/reduce.h
00:03:11.668 LINK spdk_nvme_discover
00:03:11.668 TEST_HEADER include/spdk/rpc.h
00:03:11.668 TEST_HEADER include/spdk/scheduler.h
00:03:11.668 TEST_HEADER include/spdk/scsi.h
00:03:11.668 TEST_HEADER include/spdk/scsi_spec.h
00:03:11.668 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:11.668 TEST_HEADER include/spdk/sock.h
00:03:11.668 TEST_HEADER include/spdk/stdinc.h
00:03:11.668 TEST_HEADER include/spdk/string.h
00:03:11.668 TEST_HEADER include/spdk/thread.h
00:03:11.668 TEST_HEADER include/spdk/trace.h
00:03:11.668 TEST_HEADER include/spdk/trace_parser.h
00:03:11.668 TEST_HEADER include/spdk/tree.h
00:03:11.668 TEST_HEADER include/spdk/ublk.h
00:03:11.668 TEST_HEADER include/spdk/util.h
00:03:11.668 TEST_HEADER include/spdk/uuid.h
00:03:11.668 TEST_HEADER include/spdk/version.h
00:03:11.668 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:11.668 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:11.668 TEST_HEADER include/spdk/vhost.h
00:03:11.668 TEST_HEADER include/spdk/vmd.h
00:03:11.668 TEST_HEADER include/spdk/xor.h
00:03:11.668 TEST_HEADER include/spdk/zipf.h
00:03:11.668 LINK ioat_perf
00:03:11.668 CXX test/cpp_headers/accel.o
00:03:11.668 CXX test/cpp_headers/accel_module.o
00:03:11.668 LINK lsvmd
00:03:11.668 LINK verify
00:03:11.668 CC examples/vmd/led/led.o
00:03:11.968 CXX test/cpp_headers/assert.o
00:03:11.968 CC app/spdk_dd/spdk_dd.o
00:03:11.968 LINK led
00:03:11.968 LINK nvme_fuzz
00:03:11.968 LINK spdk_nvme_perf
00:03:11.968 CC app/vhost/vhost.o 
00:03:11.968 CXX test/cpp_headers/barrier.o
00:03:11.968 CC app/fio/nvme/fio_plugin.o
00:03:11.968 CC examples/idxd/perf/perf.o
00:03:12.227 LINK spdk_nvme_identify
00:03:12.227 CXX test/cpp_headers/base64.o
00:03:12.227 LINK vhost
00:03:12.227 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:12.227 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:12.227 CC app/fio/bdev/fio_plugin.o
00:03:12.227 LINK spdk_top
00:03:12.227 LINK spdk_dd
00:03:12.486 CXX test/cpp_headers/bdev.o
00:03:12.486 LINK idxd_perf
00:03:12.486 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:12.486 CC test/env/mem_callbacks/mem_callbacks.o
00:03:12.486 CXX test/cpp_headers/bdev_module.o
00:03:12.486 LINK spdk_nvme
00:03:12.486 CC test/event/event_perf/event_perf.o
00:03:12.745 CC test/env/vtophys/vtophys.o
00:03:12.745 CC test/nvme/aer/aer.o
00:03:12.745 CXX test/cpp_headers/bdev_zone.o
00:03:12.745 LINK vtophys
00:03:12.745 LINK event_perf
00:03:12.745 LINK spdk_bdev
00:03:12.745 LINK vhost_fuzz
00:03:13.005 CXX test/cpp_headers/bit_array.o
00:03:13.005 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:13.005 LINK aer
00:03:13.005 CC test/env/memory/memory_ut.o
00:03:13.005 CC test/event/reactor/reactor.o
00:03:13.005 CC test/nvme/reset/reset.o
00:03:13.005 CXX test/cpp_headers/bit_pool.o
00:03:13.005 CC test/nvme/sgl/sgl.o
00:03:13.005 LINK env_dpdk_post_init
00:03:13.005 LINK mem_callbacks
00:03:13.264 LINK reactor
00:03:13.264 CC test/env/pci/pci_ut.o
00:03:13.264 CXX test/cpp_headers/blob_bdev.o
00:03:13.264 LINK reset
00:03:13.264 LINK sgl
00:03:13.264 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:13.522 CC test/event/reactor_perf/reactor_perf.o
00:03:13.522 CC examples/thread/thread/thread_ex.o
00:03:13.522 CXX test/cpp_headers/blobfs_bdev.o
00:03:13.522 LINK reactor_perf
00:03:13.522 LINK interrupt_tgt
00:03:13.522 CC test/nvme/e2edp/nvme_dp.o
00:03:13.522 LINK pci_ut
00:03:13.522 CC examples/sock/hello_world/hello_sock.o
00:03:13.780 CXX test/cpp_headers/blobfs.o
00:03:13.780 LINK thread
00:03:14.038 CXX test/cpp_headers/blob.o
00:03:14.038 CC test/event/app_repeat/app_repeat.o
00:03:14.038 CC test/nvme/overhead/overhead.o
00:03:14.038 LINK nvme_dp
00:03:14.038 LINK hello_sock
00:03:14.038 LINK iscsi_fuzz
00:03:14.038 CC test/rpc_client/rpc_client_test.o
00:03:14.038 CC test/event/scheduler/scheduler.o
00:03:14.297 LINK app_repeat
00:03:14.297 CXX test/cpp_headers/conf.o
00:03:14.297 LINK overhead
00:03:14.297 CC test/app/histogram_perf/histogram_perf.o
00:03:14.297 LINK rpc_client_test
00:03:14.297 CC test/accel/dif/dif.o
00:03:14.297 LINK memory_ut
00:03:14.297 LINK scheduler
00:03:14.297 CXX test/cpp_headers/config.o
00:03:14.297 CC test/blobfs/mkfs/mkfs.o
00:03:14.554 CC test/app/jsoncat/jsoncat.o
00:03:14.554 CXX test/cpp_headers/cpuset.o
00:03:14.554 LINK histogram_perf
00:03:14.554 CC test/nvme/err_injection/err_injection.o
00:03:14.554 LINK jsoncat
00:03:14.554 CXX test/cpp_headers/crc16.o
00:03:14.554 CXX test/cpp_headers/crc32.o
00:03:14.554 LINK mkfs
00:03:14.554 CC test/app/stub/stub.o
00:03:14.811 CC test/nvme/startup/startup.o
00:03:14.811 LINK err_injection
00:03:14.811 CC test/lvol/esnap/esnap.o
00:03:14.811 CXX test/cpp_headers/crc64.o
00:03:14.811 CC test/nvme/reserve/reserve.o
00:03:14.811 CXX test/cpp_headers/dif.o
00:03:14.811 LINK stub
00:03:14.811 CC test/nvme/simple_copy/simple_copy.o
00:03:14.811 CXX test/cpp_headers/dma.o
00:03:14.811 LINK startup
00:03:15.086 LINK dif
00:03:15.086 LINK reserve
00:03:15.086 CXX test/cpp_headers/endian.o
00:03:15.086 LINK simple_copy 
00:03:15.344 CC examples/accel/perf/accel_perf.o
00:03:15.344 CC examples/nvme/hello_world/hello_world.o
00:03:15.344 CC examples/fsdev/hello_world/hello_fsdev.o
00:03:15.344 CC examples/nvme/reconnect/reconnect.o
00:03:15.344 CC examples/blob/hello_world/hello_blob.o
00:03:15.344 CXX test/cpp_headers/env_dpdk.o
00:03:15.344 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:15.602 CC test/nvme/connect_stress/connect_stress.o
00:03:15.602 LINK hello_world
00:03:15.602 LINK hello_blob
00:03:15.602 CXX test/cpp_headers/env.o
00:03:15.859 LINK connect_stress
00:03:15.859 LINK hello_fsdev
00:03:15.859 LINK accel_perf
00:03:15.859 CXX test/cpp_headers/event.o
00:03:15.859 LINK reconnect
00:03:15.859 LINK nvme_manage
00:03:16.117 CC test/nvme/boot_partition/boot_partition.o
00:03:16.117 CC examples/nvme/arbitration/arbitration.o
00:03:16.117 CXX test/cpp_headers/fd_group.o
00:03:16.117 CC examples/blob/cli/blobcli.o
00:03:16.117 CC test/nvme/compliance/nvme_compliance.o
00:03:16.117 CC examples/nvme/hotplug/hotplug.o
00:03:16.117 LINK boot_partition
00:03:16.117 CC examples/bdev/hello_world/hello_bdev.o
00:03:16.117 CC examples/bdev/bdevperf/bdevperf.o
00:03:16.117 CXX test/cpp_headers/fd.o
00:03:16.374 LINK arbitration
00:03:16.374 LINK hotplug
00:03:16.374 CXX test/cpp_headers/file.o
00:03:16.374 LINK nvme_compliance
00:03:16.632 LINK hello_bdev
00:03:16.632 CC test/nvme/fused_ordering/fused_ordering.o
00:03:16.632 CXX test/cpp_headers/fsdev.o
00:03:16.889 LINK blobcli
00:03:16.889 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:16.889 CXX test/cpp_headers/fsdev_module.o
00:03:16.889 LINK fused_ordering
00:03:17.147 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:17.147 LINK doorbell_aers
00:03:17.147 CXX test/cpp_headers/ftl.o
00:03:17.147 CC test/nvme/fdp/fdp.o
00:03:17.405 CC test/nvme/cuse/cuse.o
00:03:17.405 LINK cmb_copy
00:03:17.405 CC test/bdev/bdevio/bdevio.o
00:03:17.405 LINK bdevperf
00:03:17.405 CC examples/nvme/abort/abort.o
00:03:17.405 CXX test/cpp_headers/fuse_dispatcher.o
00:03:17.663 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:17.663 CXX test/cpp_headers/gpt_spec.o
00:03:17.663 CXX test/cpp_headers/hexlify.o
00:03:17.663 LINK fdp
00:03:17.663 CXX test/cpp_headers/histogram_data.o
00:03:17.663 LINK pmr_persistence
00:03:17.663 CXX test/cpp_headers/idxd.o
00:03:17.922 CXX test/cpp_headers/idxd_spec.o
00:03:17.922 CXX test/cpp_headers/init.o
00:03:17.922 LINK bdevio
00:03:17.922 CXX test/cpp_headers/ioat.o
00:03:17.922 CXX test/cpp_headers/ioat_spec.o
00:03:17.922 LINK abort
00:03:17.922 CXX test/cpp_headers/iscsi_spec.o
00:03:17.922 CXX test/cpp_headers/json.o
00:03:18.181 CXX test/cpp_headers/jsonrpc.o
00:03:18.181 CXX test/cpp_headers/keyring.o
00:03:18.181 CXX test/cpp_headers/keyring_module.o
00:03:18.181 CXX test/cpp_headers/likely.o
00:03:18.181 CXX test/cpp_headers/log.o
00:03:18.181 CXX test/cpp_headers/lvol.o
00:03:18.181 CXX test/cpp_headers/md5.o
00:03:18.439 CXX test/cpp_headers/memory.o
00:03:18.439 CXX test/cpp_headers/mmio.o
00:03:18.439 CXX test/cpp_headers/nbd.o
00:03:18.439 CXX test/cpp_headers/net.o
00:03:18.439 CXX test/cpp_headers/notify.o
00:03:18.439 CXX test/cpp_headers/nvme.o
00:03:18.439 CC examples/nvmf/nvmf/nvmf.o
00:03:18.439 CXX test/cpp_headers/nvme_intel.o
00:03:18.439 CXX test/cpp_headers/nvme_ocssd.o
00:03:18.439 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:18.439 CXX test/cpp_headers/nvme_spec.o
00:03:18.439 CXX test/cpp_headers/nvme_zns.o
00:03:18.695 CXX test/cpp_headers/nvmf_cmd.o
00:03:18.695 CXX test/cpp_headers/nvmf_fc_spec.o 
00:03:18.695 CXX test/cpp_headers/nvmf.o
00:03:18.695 CXX test/cpp_headers/nvmf_spec.o
00:03:18.695 CXX test/cpp_headers/nvmf_transport.o
00:03:18.695 CXX test/cpp_headers/opal.o
00:03:18.695 LINK nvmf
00:03:18.695 CXX test/cpp_headers/opal_spec.o
00:03:18.952 LINK cuse
00:03:18.952 CXX test/cpp_headers/pci_ids.o
00:03:18.952 CXX test/cpp_headers/pipe.o
00:03:18.952 CXX test/cpp_headers/queue.o
00:03:18.952 CXX test/cpp_headers/reduce.o
00:03:18.952 CXX test/cpp_headers/rpc.o
00:03:18.952 CXX test/cpp_headers/scheduler.o
00:03:18.952 CXX test/cpp_headers/scsi.o
00:03:18.952 CXX test/cpp_headers/scsi_spec.o
00:03:18.952 CXX test/cpp_headers/sock.o
00:03:18.952 CXX test/cpp_headers/stdinc.o
00:03:19.208 CXX test/cpp_headers/string.o
00:03:19.208 CXX test/cpp_headers/thread.o
00:03:19.208 CXX test/cpp_headers/trace.o
00:03:19.208 CXX test/cpp_headers/trace_parser.o
00:03:19.208 CXX test/cpp_headers/tree.o
00:03:19.208 CXX test/cpp_headers/ublk.o
00:03:19.208 CXX test/cpp_headers/util.o
00:03:19.208 CXX test/cpp_headers/uuid.o
00:03:19.208 CXX test/cpp_headers/version.o
00:03:19.465 CXX test/cpp_headers/vfio_user_pci.o
00:03:19.465 CXX test/cpp_headers/vfio_user_spec.o
00:03:19.465 CXX test/cpp_headers/vhost.o
00:03:19.465 CXX test/cpp_headers/vmd.o
00:03:19.465 CXX test/cpp_headers/xor.o
00:03:19.465 CXX test/cpp_headers/zipf.o
00:03:20.394 LINK esnap
00:03:20.650 
00:03:20.650 real 1m28.233s
00:03:20.650 user 8m23.111s
00:03:20.650 sys 1m42.331s
00:03:20.650 12:06:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:20.650 12:06:51 make -- common/autotest_common.sh@10 -- $ set +x
00:03:20.650 ************************************
00:03:20.650 END TEST make
00:03:20.650 ************************************
00:03:20.908 12:06:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:20.908 12:06:51 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:20.908 12:06:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:20.908 12:06:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.908 12:06:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:03:20.908 12:06:51 -- pm/common@44 -- $ pid=5305
00:03:20.908 12:06:51 -- pm/common@50 -- $ kill -TERM 5305
00:03:20.908 12:06:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:20.908 12:06:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:03:20.908 12:06:51 -- pm/common@44 -- $ pid=5307
00:03:20.908 12:06:51 -- pm/common@50 -- $ kill -TERM 5307
00:03:20.908 12:06:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:03:20.908 12:06:51 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:20.908 12:06:51 -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:20.908 12:06:51 -- common/autotest_common.sh@1693 -- # lcov --version
00:03:20.908 12:06:51 -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:20.908 12:06:51 -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:20.908 12:06:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:20.908 12:06:51 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:20.908 12:06:51 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:20.908 12:06:51 -- scripts/common.sh@336 -- # IFS=.-:
00:03:20.908 12:06:51 -- scripts/common.sh@336 -- # read -ra ver1
00:03:20.908 12:06:51 -- scripts/common.sh@337 -- # IFS=.-: 
00:03:20.908 12:06:51 -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.908 12:06:51 -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.908 12:06:51 -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.908 12:06:51 -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.908 12:06:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:20.908 12:06:51 -- scripts/common.sh@344 -- # case "$op" in 00:03:20.908 12:06:51 -- scripts/common.sh@345 -- # : 1 00:03:20.908 12:06:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.908 12:06:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:20.908 12:06:51 -- scripts/common.sh@365 -- # decimal 1 00:03:20.908 12:06:51 -- scripts/common.sh@353 -- # local d=1 00:03:20.908 12:06:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.908 12:06:51 -- scripts/common.sh@355 -- # echo 1 00:03:20.908 12:06:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.908 12:06:51 -- scripts/common.sh@366 -- # decimal 2 00:03:20.908 12:06:51 -- scripts/common.sh@353 -- # local d=2 00:03:20.908 12:06:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.908 12:06:51 -- scripts/common.sh@355 -- # echo 2 00:03:20.908 12:06:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.908 12:06:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.908 12:06:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.908 12:06:51 -- scripts/common.sh@368 -- # return 0 00:03:20.908 12:06:51 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.908 12:06:51 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:20.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.908 --rc genhtml_branch_coverage=1 00:03:20.908 --rc genhtml_function_coverage=1 00:03:20.908 --rc genhtml_legend=1 00:03:20.908 --rc geninfo_all_blocks=1 00:03:20.908 --rc geninfo_unexecuted_blocks=1 00:03:20.908 00:03:20.908 ' 00:03:20.908 12:06:51 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:20.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.908 --rc genhtml_branch_coverage=1 00:03:20.908 --rc genhtml_function_coverage=1 00:03:20.909 --rc genhtml_legend=1 00:03:20.909 --rc geninfo_all_blocks=1 00:03:20.909 --rc geninfo_unexecuted_blocks=1 00:03:20.909 00:03:20.909 ' 00:03:20.909 12:06:51 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:20.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.909 --rc genhtml_branch_coverage=1 00:03:20.909 --rc genhtml_function_coverage=1 00:03:20.909 --rc genhtml_legend=1 00:03:20.909 --rc geninfo_all_blocks=1 00:03:20.909 --rc geninfo_unexecuted_blocks=1 00:03:20.909 00:03:20.909 ' 00:03:20.909 12:06:51 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:20.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.909 --rc genhtml_branch_coverage=1 00:03:20.909 --rc genhtml_function_coverage=1 00:03:20.909 --rc genhtml_legend=1 00:03:20.909 --rc geninfo_all_blocks=1 00:03:20.909 --rc geninfo_unexecuted_blocks=1 00:03:20.909 00:03:20.909 ' 00:03:20.909 12:06:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:20.909 12:06:51 -- nvmf/common.sh@7 -- # uname -s 00:03:20.909 12:06:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:20.909 12:06:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:20.909 12:06:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:20.909 12:06:51 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:03:20.909 12:06:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:20.909 12:06:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:20.909 12:06:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:20.909 12:06:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:20.909 12:06:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:20.909 12:06:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:20.909 12:06:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:03:20.909 12:06:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:03:20.909 12:06:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:20.909 12:06:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:20.909 12:06:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:20.909 12:06:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:20.909 12:06:51 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:20.909 12:06:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:20.909 12:06:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:20.909 12:06:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:20.909 12:06:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:20.909 12:06:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.909 12:06:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.909 12:06:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.909 12:06:51 -- paths/export.sh@5 -- # export PATH 00:03:20.909 12:06:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.909 12:06:51 -- nvmf/common.sh@51 -- # : 0 00:03:20.909 12:06:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:20.909 12:06:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:20.909 12:06:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:20.909 12:06:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:20.909 12:06:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:20.909 12:06:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:20.909 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:20.909 12:06:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:20.909 12:06:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:20.909 12:06:51 -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:03:20.909 12:06:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:20.909 12:06:51 -- spdk/autotest.sh@32 -- # uname -s 00:03:20.909 12:06:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:20.909 12:06:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:20.909 12:06:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:20.909 12:06:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:20.909 12:06:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:20.909 12:06:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.167 12:06:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.167 12:06:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.167 12:06:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.167 12:06:51 -- spdk/autotest.sh@48 -- # udevadm_pid=56083 00:03:21.167 12:06:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.167 12:06:51 -- pm/common@17 -- # local monitor 00:03:21.167 12:06:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.167 12:06:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.167 12:06:51 -- pm/common@25 -- # sleep 1 00:03:21.167 12:06:51 -- pm/common@21 -- # date +%s 00:03:21.167 12:06:51 -- pm/common@21 -- # date +%s 00:03:21.167 12:06:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733400411 00:03:21.167 12:06:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733400411 00:03:21.167 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733400411_collect-cpu-load.pm.log 00:03:21.167 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733400411_collect-vmstat.pm.log 00:03:22.099 12:06:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.099 12:06:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.099 12:06:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:22.099 12:06:52 -- common/autotest_common.sh@10 -- # set +x 00:03:22.099 12:06:52 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.099 12:06:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:22.099 12:06:52 -- common/autotest_common.sh@10 -- # set +x 00:03:22.099 12:06:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:22.099 12:06:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:22.099 12:06:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:22.099 12:06:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:22.099 12:06:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.099 12:06:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.099 12:06:52 -- common/autotest_common.sh@1457 -- # uname 00:03:22.099 12:06:52 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:22.099 12:06:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.099 12:06:52 -- common/autotest_common.sh@1477 -- # uname 00:03:22.099 12:06:52 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 
00:03:22.099 12:06:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:22.099 12:06:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:22.099 lcov: LCOV version 1.15 00:03:22.099 12:06:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:40.244 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:40.244 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.444 12:07:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:52.444 12:07:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.444 12:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:52.444 12:07:22 -- spdk/autotest.sh@78 -- # rm -f 00:03:52.444 12:07:22 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.444 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:52.444 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:52.444 12:07:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:52.444 12:07:23 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:52.444 12:07:23 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:52.444 12:07:23 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:52.444 12:07:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.444 12:07:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:52.444 12:07:23 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:52.444 12:07:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.444 12:07:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.444 12:07:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.444 12:07:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:52.444 12:07:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:52.444 12:07:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:52.444 12:07:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.444 12:07:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.444 12:07:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:52.444 12:07:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:52.444 12:07:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:52.444 12:07:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.444 12:07:23 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:52.444 12:07:23 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:52.444 12:07:23 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:52.444 12:07:23 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned 
]] 00:03:52.444 12:07:23 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:52.444 12:07:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:52.444 12:07:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.444 12:07:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.444 12:07:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:52.444 12:07:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:52.444 12:07:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:52.444 No valid GPT data, bailing 00:03:52.444 12:07:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.444 12:07:23 -- scripts/common.sh@394 -- # pt= 00:03:52.444 12:07:23 -- scripts/common.sh@395 -- # return 1 00:03:52.444 12:07:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:52.444 1+0 records in 00:03:52.444 1+0 records out 00:03:52.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519048 s, 202 MB/s 00:03:52.444 12:07:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.444 12:07:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.444 12:07:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:52.444 12:07:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:52.444 12:07:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:52.444 No valid GPT data, bailing 00:03:52.444 12:07:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:52.444 12:07:23 -- scripts/common.sh@394 -- # pt= 00:03:52.444 12:07:23 -- scripts/common.sh@395 -- # return 1 00:03:52.444 12:07:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:52.444 1+0 records in 00:03:52.444 1+0 records out 00:03:52.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542853 s, 193 MB/s 00:03:52.444 12:07:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.444 12:07:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.444 12:07:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:52.444 12:07:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:52.444 12:07:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:52.444 No valid GPT data, bailing 00:03:52.444 12:07:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:52.703 12:07:23 -- scripts/common.sh@394 -- # pt= 00:03:52.703 12:07:23 -- scripts/common.sh@395 -- # return 1 00:03:52.703 12:07:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:52.703 1+0 records in 00:03:52.703 1+0 records out 00:03:52.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520014 s, 202 MB/s 00:03:52.703 12:07:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.703 12:07:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.703 12:07:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:52.703 12:07:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:52.703 12:07:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:52.703 No valid GPT data, bailing 00:03:52.703 12:07:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:52.703 12:07:23 -- scripts/common.sh@394 -- # pt= 00:03:52.703 12:07:23 -- scripts/common.sh@395 -- # return 1 00:03:52.703 12:07:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 
bs=1M count=1 00:03:52.703 1+0 records in 00:03:52.703 1+0 records out 00:03:52.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493309 s, 213 MB/s 00:03:52.703 12:07:23 -- spdk/autotest.sh@105 -- # sync 00:03:52.703 12:07:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.703 12:07:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.703 12:07:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:55.253 12:07:25 -- spdk/autotest.sh@111 -- # uname -s 00:03:55.253 12:07:25 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:55.253 12:07:25 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:55.253 12:07:25 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:55.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.512 Hugepages 00:03:55.512 node hugesize free / total 00:03:55.512 node0 1048576kB 0 / 0 00:03:55.512 node0 2048kB 0 / 0 00:03:55.512 00:03:55.512 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:55.512 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:55.770 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:55.770 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:55.770 12:07:26 -- spdk/autotest.sh@117 -- # uname -s 00:03:55.770 12:07:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:55.770 12:07:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:55.770 12:07:26 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:56.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.593 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.593 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.593 12:07:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:57.966 12:07:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:57.966 12:07:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:57.966 12:07:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:57.966 12:07:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:57.966 12:07:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:57.966 12:07:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:57.966 12:07:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:57.966 12:07:28 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:57.966 12:07:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:57.966 12:07:28 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:57.966 12:07:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:57.966 12:07:28 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.225 Waiting for block devices as requested 00:03:58.225 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:58.225 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:58.483 12:07:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:58.483 12:07:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:58.483 12:07:29 -- 
common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:58.483 12:07:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:03:58.483 12:07:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:58.483 12:07:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:58.483 12:07:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:58.483 12:07:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:03:58.483 12:07:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:58.483 12:07:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:58.483 12:07:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:58.483 12:07:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:58.483 12:07:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:58.483 12:07:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:58.483 12:07:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:58.483 12:07:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:58.483 12:07:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:58.483 12:07:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:58.483 12:07:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:58.483 12:07:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:58.483 12:07:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:58.483 12:07:29 -- common/autotest_common.sh@1543 -- # continue 00:03:58.483 12:07:29 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:58.483 12:07:29 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:58.483 12:07:29 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:58.483 12:07:29 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:03:58.483 12:07:29 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:58.483 12:07:29 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:58.483 12:07:29 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:58.483 12:07:29 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:58.483 12:07:29 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:58.483 12:07:29 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:58.483 12:07:29 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:58.483 12:07:29 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:58.483 12:07:29 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:58.483 12:07:29 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:03:58.483 12:07:29 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:58.483 12:07:29 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:58.483 12:07:29 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:58.483 12:07:29 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:58.483 12:07:29 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:58.483 12:07:29 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:58.483 12:07:29 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:58.483 12:07:29 -- 
common/autotest_common.sh@1543 -- # continue 00:03:58.483 12:07:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:58.483 12:07:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:58.483 12:07:29 -- common/autotest_common.sh@10 -- # set +x 00:03:58.483 12:07:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:58.483 12:07:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.483 12:07:29 -- common/autotest_common.sh@10 -- # set +x 00:03:58.483 12:07:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.308 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.308 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.308 12:07:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:59.308 12:07:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:59.308 12:07:30 -- common/autotest_common.sh@10 -- # set +x 00:03:59.308 12:07:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:59.308 12:07:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:59.308 12:07:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:59.308 12:07:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:59.308 12:07:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:59.308 12:07:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:59.308 12:07:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:59.308 12:07:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:59.308 12:07:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:59.308 12:07:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:59.308 12:07:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.308 12:07:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:59.308 12:07:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:59.567 12:07:30 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:59.567 12:07:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:59.567 12:07:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:59.567 12:07:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:59.567 12:07:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:59.567 12:07:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:59.567 12:07:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:59.567 12:07:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:59.567 12:07:30 -- common/autotest_common.sh@1566 -- # device=0x0010 00:03:59.567 12:07:30 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:59.567 12:07:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:59.567 12:07:30 -- common/autotest_common.sh@1572 -- # return 0 00:03:59.567 12:07:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:59.567 12:07:30 -- common/autotest_common.sh@1580 -- # return 0 00:03:59.567 12:07:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:59.567 12:07:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:59.567 12:07:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:59.567 12:07:30 -- spdk/autotest.sh@142 -- 
# [[ 0 -eq 1 ]] 00:03:59.567 12:07:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:59.567 12:07:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.567 12:07:30 -- common/autotest_common.sh@10 -- # set +x 00:03:59.567 12:07:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:59.567 12:07:30 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:59.567 12:07:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.567 12:07:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.567 12:07:30 -- common/autotest_common.sh@10 -- # set +x 00:03:59.567 ************************************ 00:03:59.567 START TEST env 00:03:59.567 ************************************ 00:03:59.567 12:07:30 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:59.567 * Looking for test storage... 00:03:59.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:59.567 12:07:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:59.567 12:07:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:59.567 12:07:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:59.567 12:07:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:59.567 12:07:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.567 12:07:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.567 12:07:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.567 12:07:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.567 12:07:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.567 12:07:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.567 12:07:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.567 12:07:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.567 12:07:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.567 12:07:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.567 12:07:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.567 12:07:30 env -- scripts/common.sh@344 -- # case "$op" in 00:03:59.567 12:07:30 env -- scripts/common.sh@345 -- # : 1 00:03:59.567 12:07:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.567 12:07:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.567 12:07:30 env -- scripts/common.sh@365 -- # decimal 1 00:03:59.567 12:07:30 env -- scripts/common.sh@353 -- # local d=1 00:03:59.567 12:07:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.567 12:07:30 env -- scripts/common.sh@355 -- # echo 1 00:03:59.826 12:07:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.826 12:07:30 env -- scripts/common.sh@366 -- # decimal 2 00:03:59.826 12:07:30 env -- scripts/common.sh@353 -- # local d=2 00:03:59.826 12:07:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.826 12:07:30 env -- scripts/common.sh@355 -- # echo 2 00:03:59.826 12:07:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.826 12:07:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.826 12:07:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.826 12:07:30 env -- scripts/common.sh@368 -- # return 0 00:03:59.826 12:07:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.826 12:07:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.826 --rc genhtml_branch_coverage=1 00:03:59.826 --rc genhtml_function_coverage=1 00:03:59.826 --rc genhtml_legend=1 00:03:59.826 --rc geninfo_all_blocks=1 00:03:59.826 --rc geninfo_unexecuted_blocks=1 00:03:59.826 00:03:59.826 ' 00:03:59.826 12:07:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.826 --rc genhtml_branch_coverage=1 00:03:59.826 --rc genhtml_function_coverage=1 00:03:59.826 --rc genhtml_legend=1 00:03:59.826 --rc geninfo_all_blocks=1 00:03:59.826 --rc geninfo_unexecuted_blocks=1 00:03:59.826 00:03:59.826 ' 00:03:59.826 12:07:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:59.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.826 --rc genhtml_branch_coverage=1 00:03:59.826 --rc genhtml_function_coverage=1 00:03:59.826 --rc genhtml_legend=1 00:03:59.826 --rc geninfo_all_blocks=1 00:03:59.826 --rc geninfo_unexecuted_blocks=1 00:03:59.826 00:03:59.826 ' 00:03:59.826 12:07:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.826 --rc genhtml_branch_coverage=1 00:03:59.826 --rc genhtml_function_coverage=1 00:03:59.826 --rc genhtml_legend=1 00:03:59.826 --rc geninfo_all_blocks=1 00:03:59.826 --rc geninfo_unexecuted_blocks=1 00:03:59.826 00:03:59.826 ' 00:03:59.826 12:07:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:59.826 12:07:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.826 12:07:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.826 12:07:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.826 ************************************ 00:03:59.826 START TEST env_memory 00:03:59.826 ************************************ 00:03:59.826 12:07:30 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:59.826 00:03:59.826 00:03:59.826 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.826 http://cunit.sourceforge.net/ 00:03:59.826 00:03:59.826 00:03:59.826 Suite: memory 00:03:59.826 Test: alloc and free memory map ...[2024-12-05 12:07:30.507201] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:59.826 passed 00:03:59.826 Test: mem map translation ...[2024-12-05 12:07:30.538475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:59.826 [2024-12-05 12:07:30.538531] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:59.826 [2024-12-05 12:07:30.538601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:59.826 [2024-12-05 12:07:30.538612] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:59.826 passed 00:03:59.826 Test: mem map registration ...[2024-12-05 12:07:30.602420] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:59.826 [2024-12-05 12:07:30.602478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:59.826 passed 00:03:59.826 Test: mem map adjacent registrations ...passed 00:03:59.826 00:03:59.826 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.826 suites 1 1 n/a 0 0 00:03:59.826 tests 4 4 4 0 0 00:03:59.826 asserts 152 152 152 0 n/a 00:03:59.826 00:03:59.826 Elapsed time = 0.214 seconds 00:03:59.826 00:03:59.826 real 0m0.235s 00:03:59.826 user 0m0.211s 00:03:59.826 sys 0m0.020s 00:03:59.827 12:07:30 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.827 12:07:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:59.827 ************************************ 00:03:59.827 END TEST env_memory 00:03:59.827 ************************************ 00:04:00.086 12:07:30 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:00.086 12:07:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.086 12:07:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.086 12:07:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.086 ************************************ 00:04:00.086 START TEST env_vtophys 00:04:00.086 ************************************ 00:04:00.086 12:07:30 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:00.086 EAL: lib.eal log level changed from notice to debug 00:04:00.086 EAL: Detected lcore 0 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 1 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 2 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 3 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 4 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 5 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 6 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 7 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 8 as core 0 on socket 0 00:04:00.086 EAL: Detected lcore 9 as core 0 on socket 0 00:04:00.086 EAL: Maximum logical cores by configuration: 128 00:04:00.086 EAL: Detected CPU lcores: 10 00:04:00.086 EAL: Detected NUMA nodes: 1 00:04:00.086 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:00.086 EAL: Detected shared linkage of DPDK 00:04:00.086 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:00.086 EAL: Selected IOVA mode 'PA' 00:04:00.086 EAL: Probing VFIO support... 00:04:00.086 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:00.086 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:00.086 EAL: Ask a virtual area of 0x2e000 bytes 00:04:00.086 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:00.086 EAL: Setting up physically contiguous memory... 00:04:00.086 EAL: Setting maximum number of open files to 524288 00:04:00.086 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:00.086 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:00.086 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.086 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:00.086 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.086 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.086 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:00.086 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:00.086 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.086 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:00.086 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.086 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.086 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:00.086 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:00.086 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.086 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:00.086 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.086 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.086 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:00.086 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:00.086 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.086 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:00.086 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.086 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.086 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:00.086 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:00.086 EAL: Hugepages will be freed exactly as allocated. 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: TSC frequency is ~2200000 KHz 00:04:00.086 EAL: Main lcore 0 is ready (tid=7f2a9dd20a00;cpuset=[0]) 00:04:00.086 EAL: Trying to obtain current memory policy. 00:04:00.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.086 EAL: Restoring previous memory policy: 0 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was expanded by 2MB 00:04:00.086 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:00.086 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:00.086 EAL: Mem event callback 'spdk:(nil)' registered 00:04:00.086 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:00.086 00:04:00.086 00:04:00.086 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.086 http://cunit.sourceforge.net/ 00:04:00.086 00:04:00.086 00:04:00.086 Suite: components_suite 00:04:00.086 Test: vtophys_malloc_test ...passed 00:04:00.086 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:00.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.086 EAL: Restoring previous memory policy: 4 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was expanded by 4MB 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was shrunk by 4MB 00:04:00.086 EAL: Trying to obtain current memory policy. 00:04:00.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.086 EAL: Restoring previous memory policy: 4 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was expanded by 6MB 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was shrunk by 6MB 00:04:00.086 EAL: Trying to obtain current memory policy. 00:04:00.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.086 EAL: Restoring previous memory policy: 4 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was expanded by 10MB 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was shrunk by 10MB 00:04:00.086 EAL: Trying to obtain current memory policy. 00:04:00.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.086 EAL: Restoring previous memory policy: 4 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was expanded by 18MB 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was shrunk by 18MB 00:04:00.086 EAL: Trying to obtain current memory policy. 00:04:00.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.086 EAL: Restoring previous memory policy: 4 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was expanded by 34MB 00:04:00.086 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.086 EAL: request: mp_malloc_sync 00:04:00.086 EAL: No shared files mode enabled, IPC is disabled 00:04:00.086 EAL: Heap on socket 0 was shrunk by 34MB 00:04:00.086 EAL: Trying to obtain current memory policy. 
00:04:00.086 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.345 EAL: Restoring previous memory policy: 4 00:04:00.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.345 EAL: request: mp_malloc_sync 00:04:00.345 EAL: No shared files mode enabled, IPC is disabled 00:04:00.345 EAL: Heap on socket 0 was expanded by 66MB 00:04:00.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.345 EAL: request: mp_malloc_sync 00:04:00.345 EAL: No shared files mode enabled, IPC is disabled 00:04:00.345 EAL: Heap on socket 0 was shrunk by 66MB 00:04:00.345 EAL: Trying to obtain current memory policy. 00:04:00.345 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.345 EAL: Restoring previous memory policy: 4 00:04:00.345 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.345 EAL: request: mp_malloc_sync 00:04:00.345 EAL: No shared files mode enabled, IPC is disabled 00:04:00.345 EAL: Heap on socket 0 was expanded by 130MB 00:04:00.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.346 EAL: request: mp_malloc_sync 00:04:00.346 EAL: No shared files mode enabled, IPC is disabled 00:04:00.346 EAL: Heap on socket 0 was shrunk by 130MB 00:04:00.346 EAL: Trying to obtain current memory policy. 00:04:00.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.346 EAL: Restoring previous memory policy: 4 00:04:00.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.346 EAL: request: mp_malloc_sync 00:04:00.346 EAL: No shared files mode enabled, IPC is disabled 00:04:00.346 EAL: Heap on socket 0 was expanded by 258MB 00:04:00.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.604 EAL: request: mp_malloc_sync 00:04:00.604 EAL: No shared files mode enabled, IPC is disabled 00:04:00.604 EAL: Heap on socket 0 was shrunk by 258MB 00:04:00.604 EAL: Trying to obtain current memory policy. 00:04:00.604 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.604 EAL: Restoring previous memory policy: 4 00:04:00.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.604 EAL: request: mp_malloc_sync 00:04:00.604 EAL: No shared files mode enabled, IPC is disabled 00:04:00.604 EAL: Heap on socket 0 was expanded by 514MB 00:04:00.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.863 EAL: request: mp_malloc_sync 00:04:00.863 EAL: No shared files mode enabled, IPC is disabled 00:04:00.863 EAL: Heap on socket 0 was shrunk by 514MB 00:04:00.863 EAL: Trying to obtain current memory policy. 
00:04:00.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.122 EAL: Restoring previous memory policy: 4 00:04:01.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.122 EAL: request: mp_malloc_sync 00:04:01.122 EAL: No shared files mode enabled, IPC is disabled 00:04:01.122 EAL: Heap on socket 0 was expanded by 1026MB 00:04:01.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.381 passed 00:04:01.381 00:04:01.381 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.381 suites 1 1 n/a 0 0 00:04:01.381 tests 2 2 2 0 0 00:04:01.381 asserts 5484 5484 5484 0 n/a 00:04:01.381 00:04:01.381 Elapsed time = 1.268 seconds 00:04:01.381 EAL: request: mp_malloc_sync 00:04:01.381 EAL: No shared files mode enabled, IPC is disabled 00:04:01.381 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:01.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.381 EAL: request: mp_malloc_sync 00:04:01.381 EAL: No shared files mode enabled, IPC is disabled 00:04:01.381 EAL: Heap on socket 0 was shrunk by 2MB 00:04:01.381 EAL: No shared files mode enabled, IPC is disabled 00:04:01.381 EAL: No shared files mode enabled, IPC is disabled 00:04:01.381 EAL: No shared files mode enabled, IPC is disabled 00:04:01.381 00:04:01.381 real 0m1.482s 00:04:01.381 user 0m0.822s 00:04:01.381 sys 0m0.521s 00:04:01.381 12:07:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.381 12:07:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:01.381 ************************************ 00:04:01.381 END TEST env_vtophys 00:04:01.381 ************************************ 00:04:01.640 12:07:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:01.640 12:07:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.640 12:07:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.640 12:07:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.640 ************************************ 00:04:01.640 START TEST env_pci 00:04:01.640 ************************************ 00:04:01.640 12:07:32 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:01.640 00:04:01.640 00:04:01.640 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.640 http://cunit.sourceforge.net/ 00:04:01.640 00:04:01.640 00:04:01.640 Suite: pci 00:04:01.640 Test: pci_hook ...[2024-12-05 12:07:32.308605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58308 has claimed it 00:04:01.640 passed 00:04:01.640 00:04:01.640 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.640 suites 1 1 n/a 0 0 00:04:01.640 tests 1 1 1 0 0 00:04:01.640 asserts 25 25 25 0 n/a 00:04:01.640 00:04:01.640 Elapsed time = 0.002 seconds 00:04:01.640 EAL: Cannot find device (10000:00:01.0) 00:04:01.640 EAL: Failed to attach device on primary process 00:04:01.640 00:04:01.640 real 0m0.023s 00:04:01.640 user 0m0.011s 00:04:01.640 sys 0m0.011s 00:04:01.640 12:07:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.640 12:07:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:01.640 ************************************ 00:04:01.640 END TEST env_pci 00:04:01.640 ************************************ 00:04:01.640 12:07:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:01.640 12:07:32 env -- env/env.sh@15 -- # uname 00:04:01.640 12:07:32 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:01.640 12:07:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:01.640 12:07:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.640 12:07:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:01.640 12:07:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.640 12:07:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.640 ************************************ 00:04:01.640 START TEST env_dpdk_post_init 00:04:01.640 ************************************ 00:04:01.640 12:07:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.640 EAL: Detected CPU lcores: 10 00:04:01.640 EAL: Detected NUMA nodes: 1 00:04:01.640 EAL: Detected shared linkage of DPDK 00:04:01.640 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.640 EAL: Selected IOVA mode 'PA' 00:04:01.900 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.900 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:01.900 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:01.900 Starting DPDK initialization... 00:04:01.900 Starting SPDK post initialization... 00:04:01.900 SPDK NVMe probe 00:04:01.900 Attaching to 0000:00:10.0 00:04:01.900 Attaching to 0000:00:11.0 00:04:01.900 Attached to 0000:00:10.0 00:04:01.900 Attached to 0000:00:11.0 00:04:01.900 Cleaning up... 00:04:01.900 00:04:01.900 real 0m0.178s 00:04:01.900 user 0m0.049s 00:04:01.900 sys 0m0.028s 00:04:01.900 12:07:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.900 12:07:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.900 ************************************ 00:04:01.900 END TEST env_dpdk_post_init 00:04:01.900 ************************************ 00:04:01.900 12:07:32 env -- env/env.sh@26 -- # uname 00:04:01.900 12:07:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:01.900 12:07:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:01.900 12:07:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.900 12:07:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.900 12:07:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.900 ************************************ 00:04:01.900 START TEST env_mem_callbacks 00:04:01.900 ************************************ 00:04:01.900 12:07:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:01.900 EAL: Detected CPU lcores: 10 00:04:01.900 EAL: Detected NUMA nodes: 1 00:04:01.900 EAL: Detected shared linkage of DPDK 00:04:01.900 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.900 EAL: Selected IOVA mode 'PA' 00:04:01.900 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.900 00:04:01.900 00:04:01.900 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.900 http://cunit.sourceforge.net/ 00:04:01.900 00:04:01.900 00:04:01.900 Suite: memory 00:04:01.900 Test: test ... 
00:04:01.900 register 0x200000200000 2097152 00:04:01.900 malloc 3145728 00:04:01.900 register 0x200000400000 4194304 00:04:01.900 buf 0x200000500000 len 3145728 PASSED 00:04:01.900 malloc 64 00:04:01.900 buf 0x2000004fff40 len 64 PASSED 00:04:01.900 malloc 4194304 00:04:01.900 register 0x200000800000 6291456 00:04:01.900 buf 0x200000a00000 len 4194304 PASSED 00:04:01.900 free 0x200000500000 3145728 00:04:01.900 free 0x2000004fff40 64 00:04:01.900 unregister 0x200000400000 4194304 PASSED 00:04:01.900 free 0x200000a00000 4194304 00:04:01.900 unregister 0x200000800000 6291456 PASSED 00:04:01.900 malloc 8388608 00:04:01.900 register 0x200000400000 10485760 00:04:01.900 buf 0x200000600000 len 8388608 PASSED 00:04:01.900 free 0x200000600000 8388608 00:04:01.900 unregister 0x200000400000 10485760 PASSED 00:04:01.900 passed 00:04:01.900 00:04:01.900 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.900 suites 1 1 n/a 0 0 00:04:01.900 tests 1 1 1 0 0 00:04:01.900 asserts 15 15 15 0 n/a 00:04:01.900 00:04:01.900 Elapsed time = 0.009 seconds 00:04:01.900 ************************************ 00:04:01.900 END TEST env_mem_callbacks 00:04:01.900 00:04:01.900 real 0m0.141s 00:04:01.900 user 0m0.018s 00:04:01.900 sys 0m0.021s 00:04:01.900 12:07:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.900 12:07:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:01.900 ************************************ 00:04:02.159 ************************************ 00:04:02.159 END TEST env 00:04:02.159 ************************************ 00:04:02.159 00:04:02.159 real 0m2.560s 00:04:02.159 user 0m1.318s 00:04:02.159 sys 0m0.864s 00:04:02.159 12:07:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.159 12:07:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.159 12:07:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:02.159 12:07:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.159 12:07:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.159 12:07:32 -- common/autotest_common.sh@10 -- # set +x 00:04:02.159 ************************************ 00:04:02.159 START TEST rpc 00:04:02.159 ************************************ 00:04:02.159 12:07:32 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:02.159 * Looking for test storage... 
00:04:02.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.159 12:07:32 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.159 12:07:32 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.159 12:07:32 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.418 12:07:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.418 12:07:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.418 12:07:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.418 12:07:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.418 12:07:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.418 12:07:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.418 12:07:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.418 12:07:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.418 12:07:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.418 12:07:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.418 12:07:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.418 12:07:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.418 12:07:33 rpc -- scripts/common.sh@345 -- # : 1 00:04:02.418 12:07:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.418 12:07:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.418 12:07:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.418 12:07:33 rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.418 12:07:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.418 12:07:33 rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.418 12:07:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.418 12:07:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.418 12:07:33 rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.418 12:07:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.418 12:07:33 rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.418 12:07:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.418 12:07:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.418 12:07:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.418 12:07:33 rpc -- scripts/common.sh@368 -- # return 0 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.418 --rc genhtml_branch_coverage=1 00:04:02.418 --rc genhtml_function_coverage=1 00:04:02.418 --rc genhtml_legend=1 00:04:02.418 --rc geninfo_all_blocks=1 00:04:02.418 --rc geninfo_unexecuted_blocks=1 00:04:02.418 00:04:02.418 ' 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.418 --rc genhtml_branch_coverage=1 00:04:02.418 --rc genhtml_function_coverage=1 00:04:02.418 --rc genhtml_legend=1 00:04:02.418 --rc geninfo_all_blocks=1 00:04:02.418 --rc geninfo_unexecuted_blocks=1 00:04:02.418 00:04:02.418 ' 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.418 --rc genhtml_branch_coverage=1 00:04:02.418 --rc genhtml_function_coverage=1 00:04:02.418 --rc 
genhtml_legend=1 00:04:02.418 --rc geninfo_all_blocks=1 00:04:02.418 --rc geninfo_unexecuted_blocks=1 00:04:02.418 00:04:02.418 ' 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.418 --rc genhtml_branch_coverage=1 00:04:02.418 --rc genhtml_function_coverage=1 00:04:02.418 --rc genhtml_legend=1 00:04:02.418 --rc geninfo_all_blocks=1 00:04:02.418 --rc geninfo_unexecuted_blocks=1 00:04:02.418 00:04:02.418 ' 00:04:02.418 12:07:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58426 00:04:02.418 12:07:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.418 12:07:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58426 00:04:02.418 12:07:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 58426 ']' 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.418 12:07:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.418 [2024-12-05 12:07:33.141635] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:04:02.418 [2024-12-05 12:07:33.141948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58426 ] 00:04:02.677 [2024-12-05 12:07:33.295664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.677 [2024-12-05 12:07:33.348460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:02.677 [2024-12-05 12:07:33.348824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58426' to capture a snapshot of events at runtime. 00:04:02.677 [2024-12-05 12:07:33.348990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:02.677 [2024-12-05 12:07:33.349133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:02.677 [2024-12-05 12:07:33.349183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58426 for offline analysis/debug. 
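For orientation, the trace above launches spdk_tgt with the bdev tracepoint group enabled (-e bdev), records its pid (58426), and blocks in waitforlisten until the target's JSON-RPC Unix socket at /var/tmp/spdk.sock is up; the rpc_cmd calls in the tests that follow drive that socket through the scripts/rpc.py client. A minimal shell sketch of the same pattern, assuming the repo layout shown in the log and substituting a simple socket poll for the harness's waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -e bdev &                   # start the target; -e bdev enables the bdev tracepoint group
    pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # poll until the RPC Unix socket appears
    "$SPDK/scripts/rpc.py" spdk_get_version                # any JSON-RPC call now goes over /var/tmp/spdk.sock
    kill "$pid" && wait "$pid"                             # mirrors the killprocess/wait teardown later in the run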
00:04:02.677 [2024-12-05 12:07:33.349868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.936 12:07:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.936 12:07:33 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:02.936 12:07:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.936 12:07:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:02.936 12:07:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:02.936 12:07:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:02.936 12:07:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.936 12:07:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.936 12:07:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.936 ************************************ 00:04:02.936 START TEST rpc_integrity 00:04:02.936 ************************************ 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.936 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.936 { 00:04:02.936 "aliases": [ 00:04:02.936 "97227003-7cc9-4fe2-8adf-01275ea5df1f" 00:04:02.936 ], 00:04:02.936 "assigned_rate_limits": { 00:04:02.936 "r_mbytes_per_sec": 0, 00:04:02.936 "rw_ios_per_sec": 0, 00:04:02.936 "rw_mbytes_per_sec": 0, 00:04:02.936 "w_mbytes_per_sec": 0 00:04:02.936 }, 00:04:02.936 "block_size": 512, 00:04:02.936 "claimed": false, 00:04:02.936 "driver_specific": {}, 00:04:02.936 "memory_domains": [ 00:04:02.936 { 00:04:02.936 "dma_device_id": "system", 00:04:02.936 "dma_device_type": 1 00:04:02.936 }, 00:04:02.936 { 00:04:02.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.936 "dma_device_type": 2 00:04:02.936 } 00:04:02.936 ], 00:04:02.936 "name": "Malloc0", 
00:04:02.936 "num_blocks": 16384, 00:04:02.936 "product_name": "Malloc disk", 00:04:02.936 "supported_io_types": { 00:04:02.936 "abort": true, 00:04:02.936 "compare": false, 00:04:02.936 "compare_and_write": false, 00:04:02.936 "copy": true, 00:04:02.936 "flush": true, 00:04:02.936 "get_zone_info": false, 00:04:02.936 "nvme_admin": false, 00:04:02.936 "nvme_io": false, 00:04:02.936 "nvme_io_md": false, 00:04:02.936 "nvme_iov_md": false, 00:04:02.936 "read": true, 00:04:02.936 "reset": true, 00:04:02.936 "seek_data": false, 00:04:02.936 "seek_hole": false, 00:04:02.936 "unmap": true, 00:04:02.936 "write": true, 00:04:02.936 "write_zeroes": true, 00:04:02.936 "zcopy": true, 00:04:02.936 "zone_append": false, 00:04:02.936 "zone_management": false 00:04:02.936 }, 00:04:02.936 "uuid": "97227003-7cc9-4fe2-8adf-01275ea5df1f", 00:04:02.936 "zoned": false 00:04:02.936 } 00:04:02.936 ]' 00:04:02.936 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.195 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.195 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:03.195 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.195 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.195 [2024-12-05 12:07:33.828488] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:03.195 [2024-12-05 12:07:33.828732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.195 [2024-12-05 12:07:33.828770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26308a0 00:04:03.195 [2024-12-05 12:07:33.828780] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.195 [2024-12-05 12:07:33.830640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.195 [2024-12-05 12:07:33.830709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.195 Passthru0 00:04:03.195 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.195 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.195 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.195 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.195 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.195 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.195 { 00:04:03.195 "aliases": [ 00:04:03.195 "97227003-7cc9-4fe2-8adf-01275ea5df1f" 00:04:03.195 ], 00:04:03.195 "assigned_rate_limits": { 00:04:03.195 "r_mbytes_per_sec": 0, 00:04:03.195 "rw_ios_per_sec": 0, 00:04:03.195 "rw_mbytes_per_sec": 0, 00:04:03.195 "w_mbytes_per_sec": 0 00:04:03.195 }, 00:04:03.195 "block_size": 512, 00:04:03.195 "claim_type": "exclusive_write", 00:04:03.196 "claimed": true, 00:04:03.196 "driver_specific": {}, 00:04:03.196 "memory_domains": [ 00:04:03.196 { 00:04:03.196 "dma_device_id": "system", 00:04:03.196 "dma_device_type": 1 00:04:03.196 }, 00:04:03.196 { 00:04:03.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.196 "dma_device_type": 2 00:04:03.196 } 00:04:03.196 ], 00:04:03.196 "name": "Malloc0", 00:04:03.196 "num_blocks": 16384, 00:04:03.196 "product_name": "Malloc disk", 00:04:03.196 "supported_io_types": { 00:04:03.196 "abort": true, 00:04:03.196 "compare": false, 00:04:03.196 
"compare_and_write": false, 00:04:03.196 "copy": true, 00:04:03.196 "flush": true, 00:04:03.196 "get_zone_info": false, 00:04:03.196 "nvme_admin": false, 00:04:03.196 "nvme_io": false, 00:04:03.196 "nvme_io_md": false, 00:04:03.196 "nvme_iov_md": false, 00:04:03.196 "read": true, 00:04:03.196 "reset": true, 00:04:03.196 "seek_data": false, 00:04:03.196 "seek_hole": false, 00:04:03.196 "unmap": true, 00:04:03.196 "write": true, 00:04:03.196 "write_zeroes": true, 00:04:03.196 "zcopy": true, 00:04:03.196 "zone_append": false, 00:04:03.196 "zone_management": false 00:04:03.196 }, 00:04:03.196 "uuid": "97227003-7cc9-4fe2-8adf-01275ea5df1f", 00:04:03.196 "zoned": false 00:04:03.196 }, 00:04:03.196 { 00:04:03.196 "aliases": [ 00:04:03.196 "d990b659-35b6-53ec-bbdc-a515534e8343" 00:04:03.196 ], 00:04:03.196 "assigned_rate_limits": { 00:04:03.196 "r_mbytes_per_sec": 0, 00:04:03.196 "rw_ios_per_sec": 0, 00:04:03.196 "rw_mbytes_per_sec": 0, 00:04:03.196 "w_mbytes_per_sec": 0 00:04:03.196 }, 00:04:03.196 "block_size": 512, 00:04:03.196 "claimed": false, 00:04:03.196 "driver_specific": { 00:04:03.196 "passthru": { 00:04:03.196 "base_bdev_name": "Malloc0", 00:04:03.196 "name": "Passthru0" 00:04:03.196 } 00:04:03.196 }, 00:04:03.196 "memory_domains": [ 00:04:03.196 { 00:04:03.196 "dma_device_id": "system", 00:04:03.196 "dma_device_type": 1 00:04:03.196 }, 00:04:03.196 { 00:04:03.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.196 "dma_device_type": 2 00:04:03.196 } 00:04:03.196 ], 00:04:03.196 "name": "Passthru0", 00:04:03.196 "num_blocks": 16384, 00:04:03.196 "product_name": "passthru", 00:04:03.196 "supported_io_types": { 00:04:03.196 "abort": true, 00:04:03.196 "compare": false, 00:04:03.196 "compare_and_write": false, 00:04:03.196 "copy": true, 00:04:03.196 "flush": true, 00:04:03.196 "get_zone_info": false, 00:04:03.196 "nvme_admin": false, 00:04:03.196 "nvme_io": false, 00:04:03.196 "nvme_io_md": false, 00:04:03.196 "nvme_iov_md": false, 00:04:03.196 "read": true, 00:04:03.196 "reset": true, 00:04:03.196 "seek_data": false, 00:04:03.196 "seek_hole": false, 00:04:03.196 "unmap": true, 00:04:03.196 "write": true, 00:04:03.196 "write_zeroes": true, 00:04:03.196 "zcopy": true, 00:04:03.196 "zone_append": false, 00:04:03.196 "zone_management": false 00:04:03.196 }, 00:04:03.196 "uuid": "d990b659-35b6-53ec-bbdc-a515534e8343", 00:04:03.196 "zoned": false 00:04:03.196 } 00:04:03.196 ]' 00:04:03.196 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.196 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.196 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.196 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.196 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.196 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.196 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.196 ************************************ 00:04:03.196 END TEST rpc_integrity 00:04:03.196 ************************************ 00:04:03.196 12:07:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.196 00:04:03.196 real 0m0.326s 00:04:03.196 user 0m0.214s 00:04:03.196 sys 0m0.036s 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.196 12:07:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 12:07:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:03.196 12:07:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.196 12:07:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.196 12:07:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.196 ************************************ 00:04:03.196 START TEST rpc_plugins 00:04:03.196 ************************************ 00:04:03.196 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:03.196 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:03.196 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.196 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.455 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.455 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:03.455 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:03.455 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.455 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.455 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.455 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:03.455 { 00:04:03.455 "aliases": [ 00:04:03.455 "0935cc69-2467-42ba-ba7b-45a0744ea637" 00:04:03.455 ], 00:04:03.455 "assigned_rate_limits": { 00:04:03.455 "r_mbytes_per_sec": 0, 00:04:03.455 "rw_ios_per_sec": 0, 00:04:03.456 "rw_mbytes_per_sec": 0, 00:04:03.456 "w_mbytes_per_sec": 0 00:04:03.456 }, 00:04:03.456 "block_size": 4096, 00:04:03.456 "claimed": false, 00:04:03.456 "driver_specific": {}, 00:04:03.456 "memory_domains": [ 00:04:03.456 { 00:04:03.456 "dma_device_id": "system", 00:04:03.456 "dma_device_type": 1 00:04:03.456 }, 00:04:03.456 { 00:04:03.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.456 "dma_device_type": 2 00:04:03.456 } 00:04:03.456 ], 00:04:03.456 "name": "Malloc1", 00:04:03.456 "num_blocks": 256, 00:04:03.456 "product_name": "Malloc disk", 00:04:03.456 "supported_io_types": { 00:04:03.456 "abort": true, 00:04:03.456 "compare": false, 00:04:03.456 "compare_and_write": false, 00:04:03.456 "copy": true, 00:04:03.456 "flush": true, 00:04:03.456 "get_zone_info": false, 00:04:03.456 "nvme_admin": false, 00:04:03.456 "nvme_io": false, 00:04:03.456 "nvme_io_md": false, 00:04:03.456 "nvme_iov_md": false, 00:04:03.456 "read": true, 00:04:03.456 "reset": true, 00:04:03.456 "seek_data": false, 00:04:03.456 "seek_hole": false, 00:04:03.456 "unmap": true, 00:04:03.456 "write": true, 00:04:03.456 "write_zeroes": true, 00:04:03.456 "zcopy": true, 00:04:03.456 "zone_append": false, 
00:04:03.456 "zone_management": false 00:04:03.456 }, 00:04:03.456 "uuid": "0935cc69-2467-42ba-ba7b-45a0744ea637", 00:04:03.456 "zoned": false 00:04:03.456 } 00:04:03.456 ]' 00:04:03.456 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:03.456 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:03.456 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:03.456 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.456 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.456 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.456 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:03.456 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.456 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.456 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.456 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:03.456 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:03.456 ************************************ 00:04:03.456 END TEST rpc_plugins 00:04:03.456 ************************************ 00:04:03.456 12:07:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:03.456 00:04:03.456 real 0m0.173s 00:04:03.456 user 0m0.108s 00:04:03.456 sys 0m0.024s 00:04:03.456 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.456 12:07:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.456 12:07:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:03.456 12:07:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.456 12:07:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.456 12:07:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.456 ************************************ 00:04:03.456 START TEST rpc_trace_cmd_test 00:04:03.456 ************************************ 00:04:03.456 12:07:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:03.456 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:03.456 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:03.456 12:07:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.456 12:07:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.456 12:07:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.456 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:03.456 "bdev": { 00:04:03.456 "mask": "0x8", 00:04:03.456 "tpoint_mask": "0xffffffffffffffff" 00:04:03.456 }, 00:04:03.456 "bdev_nvme": { 00:04:03.456 "mask": "0x4000", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "bdev_raid": { 00:04:03.456 "mask": "0x20000", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "blob": { 00:04:03.456 "mask": "0x10000", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "blobfs": { 00:04:03.456 "mask": "0x80", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "dsa": { 00:04:03.456 "mask": "0x200", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "ftl": { 00:04:03.456 "mask": "0x40", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "iaa": { 00:04:03.456 "mask": "0x1000", 
00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "iscsi_conn": { 00:04:03.456 "mask": "0x2", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "nvme_pcie": { 00:04:03.456 "mask": "0x800", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "nvme_tcp": { 00:04:03.456 "mask": "0x2000", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "nvmf_rdma": { 00:04:03.456 "mask": "0x10", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "nvmf_tcp": { 00:04:03.456 "mask": "0x20", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "scheduler": { 00:04:03.456 "mask": "0x40000", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "scsi": { 00:04:03.456 "mask": "0x4", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "sock": { 00:04:03.456 "mask": "0x8000", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "thread": { 00:04:03.456 "mask": "0x400", 00:04:03.456 "tpoint_mask": "0x0" 00:04:03.456 }, 00:04:03.456 "tpoint_group_mask": "0x8", 00:04:03.456 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58426" 00:04:03.456 }' 00:04:03.456 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:03.715 ************************************ 00:04:03.715 END TEST rpc_trace_cmd_test 00:04:03.715 ************************************ 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:03.715 00:04:03.715 real 0m0.266s 00:04:03.715 user 0m0.239s 00:04:03.715 sys 0m0.018s 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.715 12:07:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.974 12:07:34 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:03.974 12:07:34 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:03.974 12:07:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.974 12:07:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.974 12:07:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.974 ************************************ 00:04:03.974 START TEST go_rpc 00:04:03.974 ************************************ 00:04:03.974 12:07:34 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.974 12:07:34 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.974 12:07:34 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:03.974 12:07:34 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["b58e3720-3a60-457d-8121-83e678e7dd39"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"b58e3720-3a60-457d-8121-83e678e7dd39","zoned":false}]' 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:03.974 12:07:34 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.974 12:07:34 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.974 12:07:34 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:03.974 ************************************ 00:04:03.974 END TEST go_rpc 00:04:03.974 ************************************ 00:04:03.974 12:07:34 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:03.974 00:04:03.974 real 0m0.234s 00:04:03.974 user 0m0.163s 00:04:03.974 sys 0m0.035s 00:04:03.974 12:07:34 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.974 12:07:34 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.242 12:07:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:04.242 12:07:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:04.242 12:07:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.242 12:07:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.242 12:07:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.242 ************************************ 00:04:04.242 START TEST rpc_daemon_integrity 00:04:04.242 ************************************ 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:04.242 
12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.242 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.243 12:07:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.243 12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:04.243 { 00:04:04.243 "aliases": [ 00:04:04.243 "76f5d140-8b30-48c4-8043-8d8f660db704" 00:04:04.243 ], 00:04:04.243 "assigned_rate_limits": { 00:04:04.243 "r_mbytes_per_sec": 0, 00:04:04.243 "rw_ios_per_sec": 0, 00:04:04.243 "rw_mbytes_per_sec": 0, 00:04:04.243 "w_mbytes_per_sec": 0 00:04:04.243 }, 00:04:04.243 "block_size": 512, 00:04:04.243 "claimed": false, 00:04:04.243 "driver_specific": {}, 00:04:04.243 "memory_domains": [ 00:04:04.243 { 00:04:04.243 "dma_device_id": "system", 00:04:04.243 "dma_device_type": 1 00:04:04.243 }, 00:04:04.243 { 00:04:04.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.243 "dma_device_type": 2 00:04:04.243 } 00:04:04.243 ], 00:04:04.243 "name": "Malloc3", 00:04:04.243 "num_blocks": 16384, 00:04:04.243 "product_name": "Malloc disk", 00:04:04.243 "supported_io_types": { 00:04:04.243 "abort": true, 00:04:04.243 "compare": false, 00:04:04.243 "compare_and_write": false, 00:04:04.243 "copy": true, 00:04:04.243 "flush": true, 00:04:04.243 "get_zone_info": false, 00:04:04.243 "nvme_admin": false, 00:04:04.243 "nvme_io": false, 00:04:04.243 "nvme_io_md": false, 00:04:04.243 "nvme_iov_md": false, 00:04:04.243 "read": true, 00:04:04.243 "reset": true, 00:04:04.243 "seek_data": false, 00:04:04.243 "seek_hole": false, 00:04:04.243 "unmap": true, 00:04:04.243 "write": true, 00:04:04.243 "write_zeroes": true, 00:04:04.243 "zcopy": true, 00:04:04.243 "zone_append": false, 00:04:04.243 "zone_management": false 00:04:04.243 }, 00:04:04.243 "uuid": "76f5d140-8b30-48c4-8043-8d8f660db704", 00:04:04.243 "zoned": false 00:04:04.243 } 00:04:04.243 ]' 00:04:04.243 12:07:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.243 [2024-12-05 12:07:35.045607] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:04.243 [2024-12-05 12:07:35.045680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:04.243 [2024-12-05 12:07:35.045697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24f0530 00:04:04.243 [2024-12-05 12:07:35.045706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:04:04.243 [2024-12-05 12:07:35.046995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:04.243 [2024-12-05 12:07:35.047042] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:04.243 Passthru0 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:04.243 { 00:04:04.243 "aliases": [ 00:04:04.243 "76f5d140-8b30-48c4-8043-8d8f660db704" 00:04:04.243 ], 00:04:04.243 "assigned_rate_limits": { 00:04:04.243 "r_mbytes_per_sec": 0, 00:04:04.243 "rw_ios_per_sec": 0, 00:04:04.243 "rw_mbytes_per_sec": 0, 00:04:04.243 "w_mbytes_per_sec": 0 00:04:04.243 }, 00:04:04.243 "block_size": 512, 00:04:04.243 "claim_type": "exclusive_write", 00:04:04.243 "claimed": true, 00:04:04.243 "driver_specific": {}, 00:04:04.243 "memory_domains": [ 00:04:04.243 { 00:04:04.243 "dma_device_id": "system", 00:04:04.243 "dma_device_type": 1 00:04:04.243 }, 00:04:04.243 { 00:04:04.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.243 "dma_device_type": 2 00:04:04.243 } 00:04:04.243 ], 00:04:04.243 "name": "Malloc3", 00:04:04.243 "num_blocks": 16384, 00:04:04.243 "product_name": "Malloc disk", 00:04:04.243 "supported_io_types": { 00:04:04.243 "abort": true, 00:04:04.243 "compare": false, 00:04:04.243 "compare_and_write": false, 00:04:04.243 "copy": true, 00:04:04.243 "flush": true, 00:04:04.243 "get_zone_info": false, 00:04:04.243 "nvme_admin": false, 00:04:04.243 "nvme_io": false, 00:04:04.243 "nvme_io_md": false, 00:04:04.243 "nvme_iov_md": false, 00:04:04.243 "read": true, 00:04:04.243 "reset": true, 00:04:04.243 "seek_data": false, 00:04:04.243 "seek_hole": false, 00:04:04.243 "unmap": true, 00:04:04.243 "write": true, 00:04:04.243 "write_zeroes": true, 00:04:04.243 "zcopy": true, 00:04:04.243 "zone_append": false, 00:04:04.243 "zone_management": false 00:04:04.243 }, 00:04:04.243 "uuid": "76f5d140-8b30-48c4-8043-8d8f660db704", 00:04:04.243 "zoned": false 00:04:04.243 }, 00:04:04.243 { 00:04:04.243 "aliases": [ 00:04:04.243 "fbb18ef6-bfcb-5635-8e11-273eb65bad7e" 00:04:04.243 ], 00:04:04.243 "assigned_rate_limits": { 00:04:04.243 "r_mbytes_per_sec": 0, 00:04:04.243 "rw_ios_per_sec": 0, 00:04:04.243 "rw_mbytes_per_sec": 0, 00:04:04.243 "w_mbytes_per_sec": 0 00:04:04.243 }, 00:04:04.243 "block_size": 512, 00:04:04.243 "claimed": false, 00:04:04.243 "driver_specific": { 00:04:04.243 "passthru": { 00:04:04.243 "base_bdev_name": "Malloc3", 00:04:04.243 "name": "Passthru0" 00:04:04.243 } 00:04:04.243 }, 00:04:04.243 "memory_domains": [ 00:04:04.243 { 00:04:04.243 "dma_device_id": "system", 00:04:04.243 "dma_device_type": 1 00:04:04.243 }, 00:04:04.243 { 00:04:04.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:04.243 "dma_device_type": 2 00:04:04.243 } 00:04:04.243 ], 00:04:04.243 "name": "Passthru0", 00:04:04.243 "num_blocks": 16384, 00:04:04.243 "product_name": "passthru", 00:04:04.243 "supported_io_types": { 00:04:04.243 "abort": true, 00:04:04.243 "compare": false, 00:04:04.243 "compare_and_write": false, 00:04:04.243 "copy": true, 
00:04:04.243 "flush": true, 00:04:04.243 "get_zone_info": false, 00:04:04.243 "nvme_admin": false, 00:04:04.243 "nvme_io": false, 00:04:04.243 "nvme_io_md": false, 00:04:04.243 "nvme_iov_md": false, 00:04:04.243 "read": true, 00:04:04.243 "reset": true, 00:04:04.243 "seek_data": false, 00:04:04.243 "seek_hole": false, 00:04:04.243 "unmap": true, 00:04:04.243 "write": true, 00:04:04.243 "write_zeroes": true, 00:04:04.243 "zcopy": true, 00:04:04.243 "zone_append": false, 00:04:04.243 "zone_management": false 00:04:04.243 }, 00:04:04.243 "uuid": "fbb18ef6-bfcb-5635-8e11-273eb65bad7e", 00:04:04.243 "zoned": false 00:04:04.243 } 00:04:04.243 ]' 00:04:04.243 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:04.501 ************************************ 00:04:04.501 END TEST rpc_daemon_integrity 00:04:04.501 ************************************ 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:04.501 00:04:04.501 real 0m0.325s 00:04:04.501 user 0m0.216s 00:04:04.501 sys 0m0.044s 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.501 12:07:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:04.501 12:07:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:04.501 12:07:35 rpc -- rpc/rpc.sh@84 -- # killprocess 58426 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 58426 ']' 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@958 -- # kill -0 58426 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@959 -- # uname 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58426 00:04:04.501 killing process with pid 58426 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58426' 00:04:04.501 12:07:35 rpc -- 
common/autotest_common.sh@973 -- # kill 58426 00:04:04.501 12:07:35 rpc -- common/autotest_common.sh@978 -- # wait 58426 00:04:05.068 00:04:05.068 real 0m2.799s 00:04:05.068 user 0m3.629s 00:04:05.068 sys 0m0.740s 00:04:05.068 12:07:35 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.068 ************************************ 00:04:05.068 END TEST rpc 00:04:05.068 12:07:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.068 ************************************ 00:04:05.068 12:07:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:05.068 12:07:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.068 12:07:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.068 12:07:35 -- common/autotest_common.sh@10 -- # set +x 00:04:05.068 ************************************ 00:04:05.068 START TEST skip_rpc 00:04:05.068 ************************************ 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:05.068 * Looking for test storage... 00:04:05.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.068 12:07:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:05.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.068 --rc genhtml_branch_coverage=1 00:04:05.068 --rc genhtml_function_coverage=1 00:04:05.068 --rc genhtml_legend=1 00:04:05.068 --rc geninfo_all_blocks=1 00:04:05.068 --rc geninfo_unexecuted_blocks=1 00:04:05.068 00:04:05.068 ' 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:05.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.068 --rc genhtml_branch_coverage=1 00:04:05.068 --rc genhtml_function_coverage=1 00:04:05.068 --rc genhtml_legend=1 00:04:05.068 --rc geninfo_all_blocks=1 00:04:05.068 --rc geninfo_unexecuted_blocks=1 00:04:05.068 00:04:05.068 ' 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:05.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.068 --rc genhtml_branch_coverage=1 00:04:05.068 --rc genhtml_function_coverage=1 00:04:05.068 --rc genhtml_legend=1 00:04:05.068 --rc geninfo_all_blocks=1 00:04:05.068 --rc geninfo_unexecuted_blocks=1 00:04:05.068 00:04:05.068 ' 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:05.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.068 --rc genhtml_branch_coverage=1 00:04:05.068 --rc genhtml_function_coverage=1 00:04:05.068 --rc genhtml_legend=1 00:04:05.068 --rc geninfo_all_blocks=1 00:04:05.068 --rc geninfo_unexecuted_blocks=1 00:04:05.068 00:04:05.068 ' 00:04:05.068 12:07:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:05.068 12:07:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:05.068 12:07:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.068 12:07:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.068 ************************************ 00:04:05.068 START TEST skip_rpc 00:04:05.068 ************************************ 00:04:05.068 12:07:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:05.068 12:07:35 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58687 00:04:05.068 12:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:05.068 12:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:05.068 12:07:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:05.327 [2024-12-05 12:07:35.959938] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:04:05.327 [2024-12-05 12:07:35.960030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58687 ] 00:04:05.327 [2024-12-05 12:07:36.104990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.327 [2024-12-05 12:07:36.148784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.595 2024/12/05 12:07:40 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58687 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58687 ']' 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58687 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58687 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:04:10.595 killing process with pid 58687 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58687' 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58687 00:04:10.595 12:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58687 00:04:10.595 00:04:10.595 real 0m5.417s 00:04:10.595 user 0m5.044s 00:04:10.595 sys 0m0.286s 00:04:10.595 12:07:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.595 12:07:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.595 ************************************ 00:04:10.595 END TEST skip_rpc 00:04:10.595 ************************************ 00:04:10.595 12:07:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:10.595 12:07:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.595 12:07:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.595 12:07:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.595 ************************************ 00:04:10.595 START TEST skip_rpc_with_json 00:04:10.595 ************************************ 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58774 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58774 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58774 ']' 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.595 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.595 [2024-12-05 12:07:41.430548] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:04:10.595 [2024-12-05 12:07:41.430638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58774 ] 00:04:10.854 [2024-12-05 12:07:41.576208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.854 [2024-12-05 12:07:41.620162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.113 [2024-12-05 12:07:41.888402] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:11.113 2024/12/05 12:07:41 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:11.113 request: 00:04:11.113 { 00:04:11.113 "method": "nvmf_get_transports", 00:04:11.113 "params": { 00:04:11.113 "trtype": "tcp" 00:04:11.113 } 00:04:11.113 } 00:04:11.113 Got JSON-RPC error response 00:04:11.113 GoRPCClient: error on JSON-RPC call 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.113 [2024-12-05 12:07:41.900531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.113 12:07:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.372 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.372 12:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.372 { 00:04:11.372 "subsystems": [ 00:04:11.372 { 00:04:11.372 "subsystem": "fsdev", 00:04:11.372 "config": [ 00:04:11.372 { 00:04:11.372 "method": "fsdev_set_opts", 00:04:11.372 "params": { 00:04:11.372 "fsdev_io_cache_size": 256, 00:04:11.372 "fsdev_io_pool_size": 65535 00:04:11.372 } 00:04:11.372 } 00:04:11.372 ] 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "subsystem": "keyring", 00:04:11.372 "config": [] 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "subsystem": "iobuf", 00:04:11.372 "config": [ 00:04:11.372 { 00:04:11.372 "method": "iobuf_set_options", 00:04:11.372 "params": { 00:04:11.372 "enable_numa": false, 00:04:11.372 "large_bufsize": 135168, 00:04:11.372 "large_pool_count": 1024, 00:04:11.372 "small_bufsize": 8192, 00:04:11.372 "small_pool_count": 8192 00:04:11.372 } 
00:04:11.372 } 00:04:11.372 ] 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "subsystem": "sock", 00:04:11.372 "config": [ 00:04:11.372 { 00:04:11.372 "method": "sock_set_default_impl", 00:04:11.372 "params": { 00:04:11.372 "impl_name": "posix" 00:04:11.372 } 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "method": "sock_impl_set_options", 00:04:11.372 "params": { 00:04:11.372 "enable_ktls": false, 00:04:11.372 "enable_placement_id": 0, 00:04:11.372 "enable_quickack": false, 00:04:11.372 "enable_recv_pipe": true, 00:04:11.372 "enable_zerocopy_send_client": false, 00:04:11.372 "enable_zerocopy_send_server": true, 00:04:11.372 "impl_name": "ssl", 00:04:11.372 "recv_buf_size": 4096, 00:04:11.372 "send_buf_size": 4096, 00:04:11.372 "tls_version": 0, 00:04:11.372 "zerocopy_threshold": 0 00:04:11.372 } 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "method": "sock_impl_set_options", 00:04:11.372 "params": { 00:04:11.372 "enable_ktls": false, 00:04:11.372 "enable_placement_id": 0, 00:04:11.372 "enable_quickack": false, 00:04:11.372 "enable_recv_pipe": true, 00:04:11.372 "enable_zerocopy_send_client": false, 00:04:11.372 "enable_zerocopy_send_server": true, 00:04:11.372 "impl_name": "posix", 00:04:11.372 "recv_buf_size": 2097152, 00:04:11.372 "send_buf_size": 2097152, 00:04:11.372 "tls_version": 0, 00:04:11.372 "zerocopy_threshold": 0 00:04:11.372 } 00:04:11.372 } 00:04:11.372 ] 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "subsystem": "vmd", 00:04:11.372 "config": [] 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "subsystem": "accel", 00:04:11.372 "config": [ 00:04:11.372 { 00:04:11.372 "method": "accel_set_options", 00:04:11.372 "params": { 00:04:11.372 "buf_count": 2048, 00:04:11.372 "large_cache_size": 16, 00:04:11.372 "sequence_count": 2048, 00:04:11.372 "small_cache_size": 128, 00:04:11.372 "task_count": 2048 00:04:11.372 } 00:04:11.372 } 00:04:11.372 ] 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "subsystem": "bdev", 00:04:11.372 "config": [ 00:04:11.372 { 00:04:11.372 "method": "bdev_set_options", 00:04:11.372 "params": { 00:04:11.372 "bdev_auto_examine": true, 00:04:11.372 "bdev_io_cache_size": 256, 00:04:11.372 "bdev_io_pool_size": 65535, 00:04:11.372 "iobuf_large_cache_size": 16, 00:04:11.372 "iobuf_small_cache_size": 128 00:04:11.372 } 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "method": "bdev_raid_set_options", 00:04:11.372 "params": { 00:04:11.372 "process_max_bandwidth_mb_sec": 0, 00:04:11.372 "process_window_size_kb": 1024 00:04:11.372 } 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "method": "bdev_iscsi_set_options", 00:04:11.372 "params": { 00:04:11.372 "timeout_sec": 30 00:04:11.372 } 00:04:11.372 }, 00:04:11.372 { 00:04:11.372 "method": "bdev_nvme_set_options", 00:04:11.372 "params": { 00:04:11.372 "action_on_timeout": "none", 00:04:11.372 "allow_accel_sequence": false, 00:04:11.372 "arbitration_burst": 0, 00:04:11.373 "bdev_retry_count": 3, 00:04:11.373 "ctrlr_loss_timeout_sec": 0, 00:04:11.373 "delay_cmd_submit": true, 00:04:11.373 "dhchap_dhgroups": [ 00:04:11.373 "null", 00:04:11.373 "ffdhe2048", 00:04:11.373 "ffdhe3072", 00:04:11.373 "ffdhe4096", 00:04:11.373 "ffdhe6144", 00:04:11.373 "ffdhe8192" 00:04:11.373 ], 00:04:11.373 "dhchap_digests": [ 00:04:11.373 "sha256", 00:04:11.373 "sha384", 00:04:11.373 "sha512" 00:04:11.373 ], 00:04:11.373 "disable_auto_failback": false, 00:04:11.373 "fast_io_fail_timeout_sec": 0, 00:04:11.373 "generate_uuids": false, 00:04:11.373 "high_priority_weight": 0, 00:04:11.373 "io_path_stat": false, 00:04:11.373 "io_queue_requests": 0, 00:04:11.373 
"keep_alive_timeout_ms": 10000, 00:04:11.373 "low_priority_weight": 0, 00:04:11.373 "medium_priority_weight": 0, 00:04:11.373 "nvme_adminq_poll_period_us": 10000, 00:04:11.373 "nvme_error_stat": false, 00:04:11.373 "nvme_ioq_poll_period_us": 0, 00:04:11.373 "rdma_cm_event_timeout_ms": 0, 00:04:11.373 "rdma_max_cq_size": 0, 00:04:11.373 "rdma_srq_size": 0, 00:04:11.373 "reconnect_delay_sec": 0, 00:04:11.373 "timeout_admin_us": 0, 00:04:11.373 "timeout_us": 0, 00:04:11.373 "transport_ack_timeout": 0, 00:04:11.373 "transport_retry_count": 4, 00:04:11.373 "transport_tos": 0 00:04:11.373 } 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "method": "bdev_nvme_set_hotplug", 00:04:11.373 "params": { 00:04:11.373 "enable": false, 00:04:11.373 "period_us": 100000 00:04:11.373 } 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "method": "bdev_wait_for_examine" 00:04:11.373 } 00:04:11.373 ] 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "subsystem": "scsi", 00:04:11.373 "config": null 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "subsystem": "scheduler", 00:04:11.373 "config": [ 00:04:11.373 { 00:04:11.373 "method": "framework_set_scheduler", 00:04:11.373 "params": { 00:04:11.373 "name": "static" 00:04:11.373 } 00:04:11.373 } 00:04:11.373 ] 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "subsystem": "vhost_scsi", 00:04:11.373 "config": [] 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "subsystem": "vhost_blk", 00:04:11.373 "config": [] 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "subsystem": "ublk", 00:04:11.373 "config": [] 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "subsystem": "nbd", 00:04:11.373 "config": [] 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "subsystem": "nvmf", 00:04:11.373 "config": [ 00:04:11.373 { 00:04:11.373 "method": "nvmf_set_config", 00:04:11.373 "params": { 00:04:11.373 "admin_cmd_passthru": { 00:04:11.373 "identify_ctrlr": false 00:04:11.373 }, 00:04:11.373 "dhchap_dhgroups": [ 00:04:11.373 "null", 00:04:11.373 "ffdhe2048", 00:04:11.373 "ffdhe3072", 00:04:11.373 "ffdhe4096", 00:04:11.373 "ffdhe6144", 00:04:11.373 "ffdhe8192" 00:04:11.373 ], 00:04:11.373 "dhchap_digests": [ 00:04:11.373 "sha256", 00:04:11.373 "sha384", 00:04:11.373 "sha512" 00:04:11.373 ], 00:04:11.373 "discovery_filter": "match_any" 00:04:11.373 } 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "method": "nvmf_set_max_subsystems", 00:04:11.373 "params": { 00:04:11.373 "max_subsystems": 1024 00:04:11.373 } 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "method": "nvmf_set_crdt", 00:04:11.373 "params": { 00:04:11.373 "crdt1": 0, 00:04:11.373 "crdt2": 0, 00:04:11.373 "crdt3": 0 00:04:11.373 } 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "method": "nvmf_create_transport", 00:04:11.373 "params": { 00:04:11.373 "abort_timeout_sec": 1, 00:04:11.373 "ack_timeout": 0, 00:04:11.373 "buf_cache_size": 4294967295, 00:04:11.373 "c2h_success": true, 00:04:11.373 "data_wr_pool_size": 0, 00:04:11.373 "dif_insert_or_strip": false, 00:04:11.373 "in_capsule_data_size": 4096, 00:04:11.373 "io_unit_size": 131072, 00:04:11.373 "max_aq_depth": 128, 00:04:11.373 "max_io_qpairs_per_ctrlr": 127, 00:04:11.373 "max_io_size": 131072, 00:04:11.373 "max_queue_depth": 128, 00:04:11.373 "num_shared_buffers": 511, 00:04:11.373 "sock_priority": 0, 00:04:11.373 "trtype": "TCP", 00:04:11.373 "zcopy": false 00:04:11.373 } 00:04:11.373 } 00:04:11.373 ] 00:04:11.373 }, 00:04:11.373 { 00:04:11.373 "subsystem": "iscsi", 00:04:11.373 "config": [ 00:04:11.373 { 00:04:11.373 "method": "iscsi_set_options", 00:04:11.373 "params": { 00:04:11.373 "allow_duplicated_isid": false, 
00:04:11.373 "chap_group": 0, 00:04:11.373 "data_out_pool_size": 2048, 00:04:11.373 "default_time2retain": 20, 00:04:11.373 "default_time2wait": 2, 00:04:11.373 "disable_chap": false, 00:04:11.373 "error_recovery_level": 0, 00:04:11.373 "first_burst_length": 8192, 00:04:11.373 "immediate_data": true, 00:04:11.373 "immediate_data_pool_size": 16384, 00:04:11.373 "max_connections_per_session": 2, 00:04:11.373 "max_large_datain_per_connection": 64, 00:04:11.373 "max_queue_depth": 64, 00:04:11.373 "max_r2t_per_connection": 4, 00:04:11.373 "max_sessions": 128, 00:04:11.373 "mutual_chap": false, 00:04:11.373 "node_base": "iqn.2016-06.io.spdk", 00:04:11.373 "nop_in_interval": 30, 00:04:11.373 "nop_timeout": 60, 00:04:11.373 "pdu_pool_size": 36864, 00:04:11.373 "require_chap": false 00:04:11.373 } 00:04:11.373 } 00:04:11.373 ] 00:04:11.373 } 00:04:11.373 ] 00:04:11.373 } 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58774 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58774 ']' 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58774 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58774 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.373 killing process with pid 58774 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58774' 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58774 00:04:11.373 12:07:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58774 00:04:11.632 12:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58805 00:04:11.632 12:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.632 12:07:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58805 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58805 ']' 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58805 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58805 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.899 killing process with pid 58805 00:04:16.899 12:07:47 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58805' 00:04:16.900 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58805 00:04:16.900 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58805 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.157 00:04:17.157 real 0m6.569s 00:04:17.157 user 0m6.095s 00:04:17.157 sys 0m0.637s 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.157 ************************************ 00:04:17.157 END TEST skip_rpc_with_json 00:04:17.157 ************************************ 00:04:17.157 12:07:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:17.157 12:07:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.157 12:07:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.157 12:07:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.157 ************************************ 00:04:17.157 START TEST skip_rpc_with_delay 00:04:17.157 ************************************ 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:17.157 12:07:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:17.415 [2024-12-05 12:07:48.065072] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
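The *ERROR* line above is the expected outcome of this test: spdk_tgt must refuse to combine --wait-for-rpc with --no-rpc-server, since there would be no RPC server to wait for. Below is a minimal sketch of that negative assertion, reduced from the harness's NOT() wrapper to a standalone Python check; the binary path and flags are taken from the log, while the helper name and timeout are illustrative assumptions.

#!/usr/bin/env python3
"""Minimal sketch: assert that spdk_tgt rejects '--wait-for-rpc'
when '--no-rpc-server' is given. Mirrors the NOT() pattern in the
shell harness; expect_failure is an illustrative name, not part of
the SPDK test code."""
import subprocess
import sys

SPDK_TGT = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt"  # path from the log

def expect_failure(args, timeout=60):
    # Run the target and require a nonzero exit status; a clean start
    # would mean the conflicting flags were (wrongly) accepted.
    result = subprocess.run([SPDK_TGT, *args],
                            capture_output=True, text=True, timeout=timeout)
    if result.returncode == 0:
        sys.exit("expected spdk_tgt to fail, but it started cleanly")
    return result.returncode

if __name__ == "__main__":
    rc = expect_failure(["--no-rpc-server", "-m", "0x1", "--wait-for-rpc"])
    print(f"spdk_tgt rejected the flag combination (rc={rc})")

The trace entries that follow confirm the same thing from the shell side: es is forced to 1 and the test passes only because the startup failed.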
00:04:17.415 12:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:17.415 12:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:17.415 12:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:17.415 12:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:17.415 00:04:17.415 real 0m0.096s 00:04:17.415 user 0m0.060s 00:04:17.415 sys 0m0.035s 00:04:17.415 12:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.415 12:07:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:17.416 ************************************ 00:04:17.416 END TEST skip_rpc_with_delay 00:04:17.416 ************************************ 00:04:17.416 12:07:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:17.416 12:07:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:17.416 12:07:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:17.416 12:07:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.416 12:07:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.416 12:07:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.416 ************************************ 00:04:17.416 START TEST exit_on_failed_rpc_init 00:04:17.416 ************************************ 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58910 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58910 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58910 ']' 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.416 12:07:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.416 [2024-12-05 12:07:48.218335] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:04:17.416 [2024-12-05 12:07:48.218434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58910 ] 00:04:17.674 [2024-12-05 12:07:48.367131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.674 [2024-12-05 12:07:48.426065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.608 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.608 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:18.608 12:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.608 12:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:18.609 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:18.609 [2024-12-05 12:07:49.295818] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:04:18.609 [2024-12-05 12:07:49.295929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58945 ] 00:04:18.609 [2024-12-05 12:07:49.442093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.867 [2024-12-05 12:07:49.489741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.867 [2024-12-05 12:07:49.489847] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:18.867 [2024-12-05 12:07:49.489861] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:18.867 [2024-12-05 12:07:49.489868] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58910 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58910 ']' 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58910 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58910 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.867 killing process with pid 58910 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58910' 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58910 00:04:18.867 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58910 00:04:19.125 00:04:19.125 real 0m1.830s 00:04:19.125 user 0m2.087s 00:04:19.125 sys 0m0.451s 00:04:19.125 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.125 12:07:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.125 ************************************ 00:04:19.125 END TEST exit_on_failed_rpc_init 00:04:19.125 ************************************ 00:04:19.383 12:07:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.383 00:04:19.383 real 0m14.311s 00:04:19.383 user 0m13.466s 00:04:19.383 sys 0m1.615s 00:04:19.383 12:07:50 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.383 12:07:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.383 ************************************ 00:04:19.383 END TEST skip_rpc 00:04:19.383 ************************************ 00:04:19.383 12:07:50 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:19.383 12:07:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.383 12:07:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.383 12:07:50 -- common/autotest_common.sh@10 -- # set +x 00:04:19.383 
************************************ 00:04:19.383 START TEST rpc_client 00:04:19.383 ************************************ 00:04:19.383 12:07:50 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:19.383 * Looking for test storage... 00:04:19.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:19.383 12:07:50 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.383 12:07:50 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.383 12:07:50 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.383 12:07:50 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.383 12:07:50 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.642 12:07:50 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.643 12:07:50 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:19.643 12:07:50 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.643 12:07:50 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.643 --rc genhtml_branch_coverage=1 00:04:19.643 --rc genhtml_function_coverage=1 00:04:19.643 --rc genhtml_legend=1 00:04:19.643 --rc geninfo_all_blocks=1 00:04:19.643 --rc geninfo_unexecuted_blocks=1 00:04:19.643 00:04:19.643 ' 00:04:19.643 12:07:50 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.643 --rc genhtml_branch_coverage=1 00:04:19.643 --rc genhtml_function_coverage=1 00:04:19.643 --rc genhtml_legend=1 00:04:19.643 --rc geninfo_all_blocks=1 00:04:19.643 --rc geninfo_unexecuted_blocks=1 00:04:19.643 00:04:19.643 ' 00:04:19.643 12:07:50 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.643 --rc genhtml_branch_coverage=1 00:04:19.643 --rc genhtml_function_coverage=1 00:04:19.643 --rc genhtml_legend=1 00:04:19.643 --rc geninfo_all_blocks=1 00:04:19.643 --rc geninfo_unexecuted_blocks=1 00:04:19.643 00:04:19.643 ' 00:04:19.643 12:07:50 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.643 --rc genhtml_branch_coverage=1 00:04:19.643 --rc genhtml_function_coverage=1 00:04:19.643 --rc genhtml_legend=1 00:04:19.643 --rc geninfo_all_blocks=1 00:04:19.643 --rc geninfo_unexecuted_blocks=1 00:04:19.643 00:04:19.643 ' 00:04:19.643 12:07:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:19.643 OK 00:04:19.643 12:07:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:19.643 00:04:19.643 real 0m0.212s 00:04:19.643 user 0m0.130s 00:04:19.643 sys 0m0.092s 00:04:19.643 12:07:50 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.643 12:07:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:19.643 ************************************ 00:04:19.643 END TEST rpc_client 00:04:19.643 ************************************ 00:04:19.643 12:07:50 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:19.643 12:07:50 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.643 12:07:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.643 12:07:50 -- common/autotest_common.sh@10 -- # set +x 00:04:19.643 ************************************ 00:04:19.643 START TEST json_config 00:04:19.643 ************************************ 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.643 12:07:50 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.643 12:07:50 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.643 12:07:50 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.643 12:07:50 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.643 12:07:50 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.643 12:07:50 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.643 12:07:50 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.643 12:07:50 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.643 12:07:50 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.643 12:07:50 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.643 12:07:50 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.643 12:07:50 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:19.643 12:07:50 json_config -- scripts/common.sh@345 -- # : 1 00:04:19.643 12:07:50 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.643 12:07:50 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.643 12:07:50 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:19.643 12:07:50 json_config -- scripts/common.sh@353 -- # local d=1 00:04:19.643 12:07:50 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.643 12:07:50 json_config -- scripts/common.sh@355 -- # echo 1 00:04:19.643 12:07:50 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.643 12:07:50 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:19.643 12:07:50 json_config -- scripts/common.sh@353 -- # local d=2 00:04:19.643 12:07:50 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.643 12:07:50 json_config -- scripts/common.sh@355 -- # echo 2 00:04:19.643 12:07:50 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.643 12:07:50 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.643 12:07:50 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.643 12:07:50 json_config -- scripts/common.sh@368 -- # return 0 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.643 --rc genhtml_branch_coverage=1 00:04:19.643 --rc genhtml_function_coverage=1 00:04:19.643 --rc genhtml_legend=1 00:04:19.643 --rc geninfo_all_blocks=1 00:04:19.643 --rc geninfo_unexecuted_blocks=1 00:04:19.643 00:04:19.643 ' 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.643 --rc genhtml_branch_coverage=1 00:04:19.643 --rc genhtml_function_coverage=1 00:04:19.643 --rc genhtml_legend=1 00:04:19.643 --rc geninfo_all_blocks=1 00:04:19.643 --rc geninfo_unexecuted_blocks=1 00:04:19.643 00:04:19.643 ' 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.643 --rc genhtml_branch_coverage=1 00:04:19.643 --rc genhtml_function_coverage=1 00:04:19.643 --rc genhtml_legend=1 00:04:19.643 --rc geninfo_all_blocks=1 00:04:19.643 --rc geninfo_unexecuted_blocks=1 00:04:19.643 00:04:19.643 ' 00:04:19.643 12:07:50 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.643 --rc genhtml_branch_coverage=1 00:04:19.643 --rc genhtml_function_coverage=1 00:04:19.643 --rc genhtml_legend=1 00:04:19.643 --rc geninfo_all_blocks=1 00:04:19.643 --rc geninfo_unexecuted_blocks=1 00:04:19.643 00:04:19.643 ' 00:04:19.643 12:07:50 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.643 12:07:50 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.643 12:07:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:19.903 12:07:50 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.903 12:07:50 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.903 12:07:50 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.903 12:07:50 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.903 12:07:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.903 12:07:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.903 12:07:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.903 12:07:50 json_config -- paths/export.sh@5 -- # export PATH 00:04:19.903 12:07:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@51 -- # : 0 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.903 12:07:50 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.903 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.903 12:07:50 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:19.903 12:07:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:19.904 12:07:50 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.904 INFO: JSON configuration test init 00:04:19.904 12:07:50 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:19.904 12:07:50 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:19.904 12:07:50 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.904 12:07:50 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.904 12:07:50 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:19.904 12:07:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:19.904 12:07:50 json_config -- json_config/common.sh@10 -- # shift 
00:04:19.904 12:07:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:19.904 12:07:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:19.904 12:07:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:19.904 12:07:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.904 12:07:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:19.904 12:07:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59079 00:04:19.904 Waiting for target to run... 00:04:19.904 12:07:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:19.904 12:07:50 json_config -- json_config/common.sh@25 -- # waitforlisten 59079 /var/tmp/spdk_tgt.sock 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 59079 ']' 00:04:19.904 12:07:50 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.904 12:07:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.904 [2024-12-05 12:07:50.596070] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:04:19.904 [2024-12-05 12:07:50.596179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59079 ] 00:04:20.472 [2024-12-05 12:07:51.036742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.472 [2024-12-05 12:07:51.080739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.041 12:07:51 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.041 12:07:51 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:21.041 12:07:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.041 00:04:21.041 12:07:51 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:21.041 12:07:51 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:21.041 12:07:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.041 12:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.041 12:07:51 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:21.041 12:07:51 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:21.041 12:07:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.041 12:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.041 12:07:51 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:21.041 12:07:51 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:21.041 12:07:51 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:21.610 12:07:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.610 12:07:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:21.610 12:07:52 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:21.610 12:07:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@54 -- # sort 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:21.869 12:07:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.869 12:07:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:21.869 12:07:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.869 12:07:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:21.869 12:07:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.869 12:07:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:22.154 MallocForNvmf0 00:04:22.154 12:07:52 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.154 12:07:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.412 MallocForNvmf1 00:04:22.412 12:07:53 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:22.412 12:07:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:22.670 [2024-12-05 12:07:53.382351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.670 12:07:53 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.670 12:07:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.929 12:07:53 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.929 12:07:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.187 12:07:53 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.187 12:07:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.452 12:07:54 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:23.452 12:07:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:23.723 [2024-12-05 12:07:54.334905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:23.723 12:07:54 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:23.723 12:07:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.723 12:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.723 12:07:54 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:23.723 12:07:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.723 12:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.723 12:07:54 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:23.723 12:07:54 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:04:23.723 12:07:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:23.988 MallocBdevForConfigChangeCheck 00:04:23.989 12:07:54 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:23.989 12:07:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.989 12:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.989 12:07:54 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:23.989 12:07:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.555 INFO: shutting down applications... 00:04:24.555 12:07:55 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:24.555 12:07:55 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:24.555 12:07:55 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:24.555 12:07:55 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:24.555 12:07:55 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:24.813 Calling clear_iscsi_subsystem 00:04:24.813 Calling clear_nvmf_subsystem 00:04:24.813 Calling clear_nbd_subsystem 00:04:24.813 Calling clear_ublk_subsystem 00:04:24.813 Calling clear_vhost_blk_subsystem 00:04:24.813 Calling clear_vhost_scsi_subsystem 00:04:24.813 Calling clear_bdev_subsystem 00:04:24.813 12:07:55 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:24.813 12:07:55 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:24.813 12:07:55 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:24.813 12:07:55 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.813 12:07:55 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:24.813 12:07:55 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:25.071 12:07:55 json_config -- json_config/json_config.sh@352 -- # break 00:04:25.071 12:07:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:25.071 12:07:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:25.071 12:07:55 json_config -- json_config/common.sh@31 -- # local app=target 00:04:25.071 12:07:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:25.071 12:07:55 json_config -- json_config/common.sh@35 -- # [[ -n 59079 ]] 00:04:25.071 12:07:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59079 00:04:25.071 12:07:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:25.071 12:07:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.071 12:07:55 json_config -- json_config/common.sh@41 -- # kill -0 59079 00:04:25.071 12:07:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.639 12:07:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.639 12:07:56 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.639 12:07:56 json_config -- json_config/common.sh@41 -- # kill -0 59079 00:04:25.639 12:07:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:25.639 12:07:56 json_config -- json_config/common.sh@43 -- # break 00:04:25.639 12:07:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:25.639 SPDK target shutdown done 00:04:25.639 12:07:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:25.639 INFO: relaunching applications... 00:04:25.639 12:07:56 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:25.639 12:07:56 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:25.639 12:07:56 json_config -- json_config/common.sh@9 -- # local app=target 00:04:25.639 12:07:56 json_config -- json_config/common.sh@10 -- # shift 00:04:25.639 12:07:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.639 12:07:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.639 12:07:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.639 12:07:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.639 12:07:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.639 12:07:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59364 00:04:25.639 12:07:56 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:25.639 12:07:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.639 Waiting for target to run... 00:04:25.639 12:07:56 json_config -- json_config/common.sh@25 -- # waitforlisten 59364 /var/tmp/spdk_tgt.sock 00:04:25.639 12:07:56 json_config -- common/autotest_common.sh@835 -- # '[' -z 59364 ']' 00:04:25.639 12:07:56 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.639 12:07:56 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.639 12:07:56 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.639 12:07:56 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.639 12:07:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.639 [2024-12-05 12:07:56.439220] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:04:25.639 [2024-12-05 12:07:56.439350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59364 ] 00:04:26.208 [2024-12-05 12:07:56.865577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.208 [2024-12-05 12:07:56.902064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.468 [2024-12-05 12:07:57.243263] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.468 [2024-12-05 12:07:57.275362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:26.727 12:07:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.727 00:04:26.727 12:07:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:26.727 12:07:57 json_config -- json_config/common.sh@26 -- # echo '' 00:04:26.727 12:07:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:26.727 INFO: Checking if target configuration is the same... 00:04:26.727 12:07:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:26.727 12:07:57 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:26.727 12:07:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:26.727 12:07:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.727 + '[' 2 -ne 2 ']' 00:04:26.727 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:26.727 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:26.727 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:26.727 +++ basename /dev/fd/62 00:04:26.727 ++ mktemp /tmp/62.XXX 00:04:26.727 + tmp_file_1=/tmp/62.voa 00:04:26.727 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:26.727 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.727 + tmp_file_2=/tmp/spdk_tgt_config.json.ZDm 00:04:26.727 + ret=0 00:04:26.727 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:26.986 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:27.245 + diff -u /tmp/62.voa /tmp/spdk_tgt_config.json.ZDm 00:04:27.245 INFO: JSON config files are the same 00:04:27.245 + echo 'INFO: JSON config files are the same' 00:04:27.245 + rm /tmp/62.voa /tmp/spdk_tgt_config.json.ZDm 00:04:27.245 + exit 0 00:04:27.245 12:07:57 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:27.245 12:07:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:27.245 INFO: changing configuration and checking if this can be detected... 
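Both the check that just passed and the change-detection step announced above rely on the same canonicalize-then-diff comparison: json_diff.sh passes the live save_config output and spdk_tgt_config.json through config_filter.py -method sort before diffing, so key and subsystem ordering cannot produce a spurious mismatch. A sketch of that idea follows; the recursive normalization is an assumed stand-in for config_filter.py, not its real implementation, and the /tmp file names are the mktemp outputs from this particular run.

#!/usr/bin/env python3
"""Sketch of the config comparison: canonicalize two JSON configs
(stand-in for config_filter.py -method sort) and compare them.
Not the harness's actual implementation."""
import json

def canonical(obj):
    # Sort dict keys recursively; order lists by a stable JSON
    # serialization so subsystem/method order is irrelevant.
    if isinstance(obj, dict):
        return {k: canonical(obj[k]) for k in sorted(obj)}
    if isinstance(obj, list):
        return sorted((canonical(x) for x in obj),
                      key=lambda x: json.dumps(x, sort_keys=True))
    return obj

def configs_match(path_a, path_b):
    with open(path_a) as fa, open(path_b) as fb:
        return canonical(json.load(fa)) == canonical(json.load(fb))

if __name__ == "__main__":
    # File names below are the mktemp outputs seen in this run.
    if configs_match("/tmp/62.voa", "/tmp/spdk_tgt_config.json.ZDm"):
        print("INFO: JSON config files are the same")
    else:
        print("configs differ")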
00:04:27.245 12:07:57 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:27.245 12:07:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:27.504 12:07:58 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:27.504 12:07:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:27.504 12:07:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.504 + '[' 2 -ne 2 ']' 00:04:27.504 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:27.504 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:27.504 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:27.504 +++ basename /dev/fd/62 00:04:27.504 ++ mktemp /tmp/62.XXX 00:04:27.504 + tmp_file_1=/tmp/62.YU9 00:04:27.504 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:27.504 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:27.504 + tmp_file_2=/tmp/spdk_tgt_config.json.tZH 00:04:27.504 + ret=0 00:04:27.504 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:27.762 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:28.021 + diff -u /tmp/62.YU9 /tmp/spdk_tgt_config.json.tZH 00:04:28.021 + ret=1 00:04:28.021 + echo '=== Start of file: /tmp/62.YU9 ===' 00:04:28.021 + cat /tmp/62.YU9 00:04:28.021 + echo '=== End of file: /tmp/62.YU9 ===' 00:04:28.021 + echo '' 00:04:28.021 + echo '=== Start of file: /tmp/spdk_tgt_config.json.tZH ===' 00:04:28.021 + cat /tmp/spdk_tgt_config.json.tZH 00:04:28.021 + echo '=== End of file: /tmp/spdk_tgt_config.json.tZH ===' 00:04:28.021 + echo '' 00:04:28.021 + rm /tmp/62.YU9 /tmp/spdk_tgt_config.json.tZH 00:04:28.021 + exit 1 00:04:28.021 INFO: configuration change detected. 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
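The detection half of the test is the same comparison run after deliberately mutating the live target: deleting the sentinel bdev MallocBdevForConfigChangeCheck guarantees the saved config now diverges from spdk_tgt_config.json, so the diff must return nonzero. Roughly, under the same assumptions as the sketch above:

# Mutate the running target, then expect the re-diff to fail.
rootdir=/home/vagrant/spdk_repo/spdk
rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$rpc bdev_malloc_delete MallocBdevForConfigChangeCheck

# json_diff.sh receives the live config via process substitution
# (hence the /dev/fd/62 in the trace) and dumps both files on mismatch.
if ! "$rootdir/test/json_config/json_diff.sh" \
        <($rpc save_config) "$rootdir/spdk_tgt_config.json"; then
    echo 'INFO: configuration change detected.'
fi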
00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@324 -- # [[ -n 59364 ]] 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.021 12:07:58 json_config -- json_config/json_config.sh@330 -- # killprocess 59364 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@954 -- # '[' -z 59364 ']' 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@958 -- # kill -0 59364 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@959 -- # uname 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59364 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.021 killing process with pid 59364 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59364' 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@973 -- # kill 59364 00:04:28.021 12:07:58 json_config -- common/autotest_common.sh@978 -- # wait 59364 00:04:28.280 12:07:58 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:28.280 12:07:58 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:28.280 12:07:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.280 12:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.280 12:07:59 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:28.280 INFO: Success 00:04:28.280 12:07:59 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:28.280 00:04:28.280 real 0m8.670s 00:04:28.280 user 0m12.315s 00:04:28.280 sys 0m1.881s 00:04:28.280 
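killprocess, as traced during the teardown above, refuses to signal blindly: it resolves the PID's command name first (an SPDK target shows up as reactor_0), guards against hitting a sudo wrapper, and only then kills and reaps the process. A simplified sketch of that guard-then-kill pattern (error paths trimmed relative to autotest_common.sh):

# Simplified from the killprocess trace; not the full implementation.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Never signal the sudo wrapper itself.
        [[ $process_name != sudo ]] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap, so the caller observes the real exit status
}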
12:07:59 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.280 12:07:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.280 ************************************ 00:04:28.280 END TEST json_config 00:04:28.280 ************************************ 00:04:28.280 12:07:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:28.280 12:07:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.280 12:07:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.280 12:07:59 -- common/autotest_common.sh@10 -- # set +x 00:04:28.280 ************************************ 00:04:28.280 START TEST json_config_extra_key 00:04:28.280 ************************************ 00:04:28.280 12:07:59 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:28.280 12:07:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.280 12:07:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.280 12:07:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.539 12:07:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.539 12:07:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:28.539 12:07:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.539 12:07:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.539 --rc genhtml_branch_coverage=1 00:04:28.539 --rc genhtml_function_coverage=1 00:04:28.539 --rc genhtml_legend=1 00:04:28.539 --rc geninfo_all_blocks=1 00:04:28.539 --rc geninfo_unexecuted_blocks=1 00:04:28.539 00:04:28.539 ' 00:04:28.539 12:07:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.539 --rc genhtml_branch_coverage=1 00:04:28.539 --rc genhtml_function_coverage=1 00:04:28.539 --rc genhtml_legend=1 00:04:28.539 --rc geninfo_all_blocks=1 00:04:28.539 --rc geninfo_unexecuted_blocks=1 00:04:28.539 00:04:28.539 ' 00:04:28.539 12:07:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.539 --rc genhtml_branch_coverage=1 00:04:28.539 --rc genhtml_function_coverage=1 00:04:28.539 --rc genhtml_legend=1 00:04:28.539 --rc geninfo_all_blocks=1 00:04:28.539 --rc geninfo_unexecuted_blocks=1 00:04:28.539 00:04:28.539 ' 00:04:28.539 12:07:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.539 --rc genhtml_branch_coverage=1 00:04:28.539 --rc genhtml_function_coverage=1 00:04:28.539 --rc genhtml_legend=1 00:04:28.539 --rc geninfo_all_blocks=1 00:04:28.539 --rc geninfo_unexecuted_blocks=1 00:04:28.539 00:04:28.539 ' 00:04:28.539 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:28.539 12:07:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:28.539 12:07:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.539 12:07:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.539 12:07:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.539 12:07:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.539 12:07:59 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.539 12:07:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:28.540 12:07:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.540 12:07:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.540 12:07:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.540 12:07:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.540 12:07:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.540 12:07:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.540 12:07:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.540 12:07:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:28.540 12:07:59 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.540 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.540 12:07:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.540 INFO: launching applications... 00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
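Note the genuine error buried in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the flag it tests is empty, and test(1) rejects an empty string where it expects an integer. A defensive form (SOME_FLAG is a stand-in name, not the variable from common.sh) sidesteps the message by defaulting the variable:

# '[' "$SOME_FLAG" -eq 1 ']' prints "integer expression expected"
# when SOME_FLAG is unset or empty; defaulting it keeps the check quiet.
if [[ ${SOME_FLAG:-0} -eq 1 ]]; then
    echo 'flag enabled'
fi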
00:04:28.540 12:07:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59542 00:04:28.540 Waiting for target to run... 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.540 12:07:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59542 /var/tmp/spdk_tgt.sock 00:04:28.540 12:07:59 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59542 ']' 00:04:28.540 12:07:59 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.540 12:07:59 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.540 12:07:59 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.540 12:07:59 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.540 12:07:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.540 [2024-12-05 12:07:59.316046] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:04:28.540 [2024-12-05 12:07:59.316154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59542 ] 00:04:29.107 [2024-12-05 12:07:59.772901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.107 [2024-12-05 12:07:59.815301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.675 12:08:00 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.675 12:08:00 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:29.675 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:29.675 INFO: shutting down applications... 00:04:29.675 12:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
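Every suite starts its target the same way, as traced above: spdk_tgt pinned to one core (-m 0x1) with a 1 GiB hugepage reservation (-s 1024), the RPC socket and JSON config at known paths, then a wait until the socket answers. A sketch of that sequence; the polling probe below is illustrative, standing in for the real waitforlisten helper whose internals are not shown in this trace:

# Start-and-wait sequence as traced above; the probe loop is a stand-in.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

"$spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid=$!

echo 'Waiting for target to run...'
for ((i = 0; i < 100; i++)); do      # max_retries=100, as in the trace
    # rpc_get_methods succeeds once the app is listening on the socket
    "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done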
00:04:29.675 12:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59542 ]] 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59542 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59542 00:04:29.675 12:08:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.242 12:08:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.242 12:08:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.242 12:08:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59542 00:04:30.242 12:08:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.242 12:08:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:30.242 12:08:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.242 SPDK target shutdown done 00:04:30.242 12:08:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.242 Success 00:04:30.242 12:08:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:30.242 00:04:30.242 real 0m1.766s 00:04:30.242 user 0m1.638s 00:04:30.242 sys 0m0.482s 00:04:30.242 12:08:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.242 ************************************ 00:04:30.242 END TEST json_config_extra_key 00:04:30.242 12:08:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.242 ************************************ 00:04:30.242 12:08:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.242 12:08:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.242 12:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.242 12:08:00 -- common/autotest_common.sh@10 -- # set +x 00:04:30.242 ************************************ 00:04:30.242 START TEST alias_rpc 00:04:30.242 ************************************ 00:04:30.242 12:08:00 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.242 * Looking for test storage... 
00:04:30.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:30.242 12:08:00 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.242 12:08:00 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.242 12:08:00 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.242 12:08:01 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.242 12:08:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:30.242 12:08:01 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.242 12:08:01 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.242 --rc genhtml_branch_coverage=1 00:04:30.242 --rc genhtml_function_coverage=1 00:04:30.242 --rc genhtml_legend=1 00:04:30.242 --rc geninfo_all_blocks=1 00:04:30.242 --rc geninfo_unexecuted_blocks=1 00:04:30.242 00:04:30.242 ' 00:04:30.242 12:08:01 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.242 --rc genhtml_branch_coverage=1 00:04:30.242 --rc genhtml_function_coverage=1 00:04:30.242 --rc genhtml_legend=1 00:04:30.242 --rc geninfo_all_blocks=1 00:04:30.242 --rc geninfo_unexecuted_blocks=1 00:04:30.242 00:04:30.242 ' 00:04:30.242 12:08:01 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.242 --rc genhtml_branch_coverage=1 00:04:30.242 --rc genhtml_function_coverage=1 00:04:30.243 --rc genhtml_legend=1 00:04:30.243 --rc geninfo_all_blocks=1 00:04:30.243 --rc geninfo_unexecuted_blocks=1 00:04:30.243 00:04:30.243 ' 00:04:30.243 12:08:01 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.243 --rc genhtml_branch_coverage=1 00:04:30.243 --rc genhtml_function_coverage=1 00:04:30.243 --rc genhtml_legend=1 00:04:30.243 --rc geninfo_all_blocks=1 00:04:30.243 --rc geninfo_unexecuted_blocks=1 00:04:30.243 00:04:30.243 ' 00:04:30.243 12:08:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:30.243 12:08:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59627 00:04:30.243 12:08:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59627 00:04:30.243 12:08:01 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59627 ']' 00:04:30.243 12:08:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:30.243 12:08:01 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.243 12:08:01 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.243 12:08:01 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.243 12:08:01 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.243 12:08:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.501 [2024-12-05 12:08:01.130478] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
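The lcov gate that opens each suite ("lt 1.15 2") is a field-wise version comparison: split both version strings on '.', '-' and ':', then walk the fields numerically until one side differs. A compact sketch of the algorithm (standalone; the real cmp_versions in scripts/common.sh also handles other operators and pads fields differently):

# Field-wise version compare distilled from the cmp_versions trace.
# Returns 0 when $1 is strictly older than $2; missing fields count as 0.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal versions are not less-than
}

version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'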
00:04:30.501 [2024-12-05 12:08:01.130558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59627 ] 00:04:30.501 [2024-12-05 12:08:01.270338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.501 [2024-12-05 12:08:01.318267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.759 12:08:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.759 12:08:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.759 12:08:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:31.326 12:08:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59627 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59627 ']' 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59627 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59627 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.326 killing process with pid 59627 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59627' 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 59627 00:04:31.326 12:08:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 59627 00:04:31.585 00:04:31.585 real 0m1.446s 00:04:31.585 user 0m1.528s 00:04:31.585 sys 0m0.448s 00:04:31.585 12:08:02 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.585 12:08:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.585 ************************************ 00:04:31.585 END TEST alias_rpc 00:04:31.585 ************************************ 00:04:31.585 12:08:02 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:04:31.585 12:08:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.585 12:08:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.585 12:08:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.585 12:08:02 -- common/autotest_common.sh@10 -- # set +x 00:04:31.585 ************************************ 00:04:31.585 START TEST dpdk_mem_utility 00:04:31.585 ************************************ 00:04:31.585 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:31.844 * Looking for test storage... 
00:04:31.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.844 12:08:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.844 --rc genhtml_branch_coverage=1 00:04:31.844 --rc genhtml_function_coverage=1 00:04:31.844 --rc genhtml_legend=1 00:04:31.844 --rc geninfo_all_blocks=1 00:04:31.844 --rc geninfo_unexecuted_blocks=1 00:04:31.844 00:04:31.844 ' 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.844 --rc 
genhtml_branch_coverage=1 00:04:31.844 --rc genhtml_function_coverage=1 00:04:31.844 --rc genhtml_legend=1 00:04:31.844 --rc geninfo_all_blocks=1 00:04:31.844 --rc geninfo_unexecuted_blocks=1 00:04:31.844 00:04:31.844 ' 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.844 --rc genhtml_branch_coverage=1 00:04:31.844 --rc genhtml_function_coverage=1 00:04:31.844 --rc genhtml_legend=1 00:04:31.844 --rc geninfo_all_blocks=1 00:04:31.844 --rc geninfo_unexecuted_blocks=1 00:04:31.844 00:04:31.844 ' 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.844 --rc genhtml_branch_coverage=1 00:04:31.844 --rc genhtml_function_coverage=1 00:04:31.844 --rc genhtml_legend=1 00:04:31.844 --rc geninfo_all_blocks=1 00:04:31.844 --rc geninfo_unexecuted_blocks=1 00:04:31.844 00:04:31.844 ' 00:04:31.844 12:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:31.844 12:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59719 00:04:31.844 12:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59719 00:04:31.844 12:08:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59719 ']' 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.844 12:08:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:31.844 [2024-12-05 12:08:02.645770] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
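The utility exercised below is driven in two steps: first ask the running target to dump its DPDK memory state over RPC (env_dpdk_get_mem_stats, which replies with the path of the dump it wrote), then post-process that file offline with scripts/dpdk_mem_info.py — no flags for the heap/mempool/memzone summary, -m 0 for the full element map of heap 0, as the trace that follows shows. A sketch under those assumptions (the script appears to read /tmp/spdk_mem_dump.txt by default):

# Dump-and-inspect flow for DPDK memory stats, as traced below.
rootdir=/home/vagrant/spdk_repo/spdk

"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats
# -> { "filename": "/tmp/spdk_mem_dump.txt" }

"$rootdir/scripts/dpdk_mem_info.py"         # summary: heaps, mempools, memzones
"$rootdir/scripts/dpdk_mem_info.py" -m 0    # full free/alloc element map of heap 0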
00:04:31.844 [2024-12-05 12:08:02.645894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:04:32.102 [2024-12-05 12:08:02.787864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.103 [2024-12-05 12:08:02.839215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.370 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.370 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:32.370 12:08:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:32.370 12:08:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:32.370 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.370 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:32.370 { 00:04:32.370 "filename": "/tmp/spdk_mem_dump.txt" 00:04:32.370 } 00:04:32.370 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.370 12:08:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:32.370 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:32.370 1 heaps totaling size 818.000000 MiB 00:04:32.370 size: 818.000000 MiB heap id: 0 00:04:32.370 end heaps---------- 00:04:32.370 9 mempools totaling size 603.782043 MiB 00:04:32.370 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:32.370 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:32.370 size: 100.555481 MiB name: bdev_io_59719 00:04:32.370 size: 50.003479 MiB name: msgpool_59719 00:04:32.370 size: 36.509338 MiB name: fsdev_io_59719 00:04:32.370 size: 21.763794 MiB name: PDU_Pool 00:04:32.370 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:32.370 size: 4.133484 MiB name: evtpool_59719 00:04:32.370 size: 0.026123 MiB name: Session_Pool 00:04:32.370 end mempools------- 00:04:32.370 6 memzones totaling size 4.142822 MiB 00:04:32.370 size: 1.000366 MiB name: RG_ring_0_59719 00:04:32.370 size: 1.000366 MiB name: RG_ring_1_59719 00:04:32.370 size: 1.000366 MiB name: RG_ring_4_59719 00:04:32.370 size: 1.000366 MiB name: RG_ring_5_59719 00:04:32.370 size: 0.125366 MiB name: RG_ring_2_59719 00:04:32.370 size: 0.015991 MiB name: RG_ring_3_59719 00:04:32.370 end memzones------- 00:04:32.370 12:08:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:32.650 heap id: 0 total size: 818.000000 MiB number of busy elements: 231 number of free elements: 15 00:04:32.650 list of free elements. 
size: 10.818237 MiB 00:04:32.650 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:32.650 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:32.650 element at address: 0x200000400000 with size: 0.996155 MiB 00:04:32.650 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:32.650 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:32.650 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:32.650 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:32.650 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:32.650 element at address: 0x20001ae00000 with size: 0.572632 MiB 00:04:32.650 element at address: 0x200000c00000 with size: 0.490662 MiB 00:04:32.650 element at address: 0x20000a600000 with size: 0.489807 MiB 00:04:32.650 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:32.650 element at address: 0x200003e00000 with size: 0.481201 MiB 00:04:32.650 element at address: 0x200028200000 with size: 0.396484 MiB 00:04:32.650 element at address: 0x200000800000 with size: 0.353394 MiB 00:04:32.650 list of standard malloc elements. size: 199.252869 MiB 00:04:32.650 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:32.650 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:32.650 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:32.650 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:32.650 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:32.650 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:32.650 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:32.650 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:32.650 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:32.650 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:32.650 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000085a780 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000085a980 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087ef00 with size: 0.000183 MiB 
00:04:32.651 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:32.651 element at 
address: 0x20000a67d640 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:32.651 element at address: 0x20001ae94540 
with size: 0.000183 MiB
00:04:32.651 element at address: 0x20001ae94600 with size: 0.000183 MiB
[ ... roughly 95 further free-list elements between 0x20001ae946c0 and 0x20002826fe40 elided; every element in this run reports the same size: 0.000183 MiB ... ]
00:04:32.652 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:04:32.652 list of memzone associated elements. size: 607.928894 MiB
00:04:32.652 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:04:32.652 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:32.652 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:04:32.652 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:32.652 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:04:32.652 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59719_0
00:04:32.652 element at address: 0x200000dff380 with size: 48.003052 MiB
00:04:32.652 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59719_0
00:04:32.652 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:04:32.652 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59719_0
00:04:32.652 element at address: 0x2000199be940 with size: 20.255554 MiB
00:04:32.652 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:32.652 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:04:32.652 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:32.652 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:04:32.652 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59719_0
00:04:32.652 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:04:32.652 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59719
00:04:32.652 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:04:32.652 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59719
00:04:32.652 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:04:32.652 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:32.652 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:04:32.652 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:32.652 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:04:32.652 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:32.652 element at address: 0x200003efba40 with size: 1.008118 MiB
00:04:32.652 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:32.652 element at address: 0x200000cff180 with size: 1.000488 MiB
00:04:32.652 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59719
00:04:32.652 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:04:32.652 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59719
00:04:32.652 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:04:32.652 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59719
00:04:32.652 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:04:32.652 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59719
00:04:32.652 element at address: 0x20000087f740 with size: 0.500488 MiB
00:04:32.652 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59719
00:04:32.652 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:04:32.652 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59719
00:04:32.652 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:04:32.652 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:32.652 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:04:32.652 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:32.652 element at address: 0x20001987c540 with size: 0.250488 MiB
00:04:32.652 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:32.652 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:04:32.652 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59719
00:04:32.652 element at address: 0x20000085ed00 with size: 0.125488 MiB
00:04:32.652 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59719
00:04:32.652 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:04:32.652 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:32.652 element at address: 0x200028265980 with size: 0.023743 MiB
00:04:32.652 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:32.652 element at address: 0x20000085aa40 with size: 0.016113 MiB
00:04:32.652 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59719
00:04:32.652 element at address: 0x20002826bac0 with size: 0.002441 MiB
00:04:32.652 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:32.652 element at address: 0x2000004ffb80 with size: 0.000305 MiB
00:04:32.652 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59719
00:04:32.652 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:04:32.652 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59719
00:04:32.652 element at address: 0x20000085a840 with size: 0.000305 MiB
00:04:32.652 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59719
00:04:32.652 element at address: 0x20002826c580 with size: 0.000305 MiB
00:04:32.652 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:32.652 12:08:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:32.652 12:08:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59719
00:04:32.652 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59719 ']'
00:04:32.652 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59719
00:04:32.652 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:32.652 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:32.652 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59719
00:04:32.652 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:32.652 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59719
12:08:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59719'
00:04:32.653 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59719
00:04:32.653 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59719
00:04:32.910
00:04:32.910 real 0m1.298s
00:04:32.910 user 0m1.250s
00:04:32.910 sys 0m0.422s
00:04:32.910 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:32.910 12:08:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:32.910 ************************************
00:04:32.910 END TEST dpdk_mem_utility
00:04:32.910 ************************************
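The heap and memzone inventory above appears to be the DPDK memory dump that test_dpdk_mem_info.sh collects before tearing the target down. On a live SPDK application the same dump can be requested by hand; a minimal sketch, assuming this build exposes the env_dpdk_get_mem_stats RPC the script relies on (the dump path shown in the comment is an assumption):

    # Ask a running SPDK target to write its DPDK memory stats to a file,
    # then count the heap elements in the dump (default RPC socket assumed;
    # the reply names the output file, typically /tmp/spdk_mem_dump.txt).
    ./scripts/rpc.py env_dpdk_get_mem_stats
    grep -c 'element at address' /tmp/spdk_mem_dump.txt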
00:04:32.910 12:08:03 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:32.910 12:08:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:32.910 12:08:03 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:32.910 12:08:03 -- common/autotest_common.sh@10 -- # set +x
00:04:32.910 ************************************
00:04:32.910 START TEST event
00:04:32.910 ************************************
00:04:32.910 12:08:03 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:33.167 * Looking for test storage...
00:04:33.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:33.167 12:08:03 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:33.167 12:08:03 event -- common/autotest_common.sh@1693 -- # lcov --version
00:04:33.167 12:08:03 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:33.167 12:08:03 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:33.168 12:08:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:33.168 12:08:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:33.168 12:08:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:33.168 12:08:03 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:33.168 12:08:03 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:33.168 12:08:03 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:33.168 12:08:03 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:33.168 12:08:03 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:33.168 12:08:03 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:33.168 12:08:03 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:33.168 12:08:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:33.168 12:08:03 event -- scripts/common.sh@344 -- # case "$op" in
00:04:33.168 12:08:03 event -- scripts/common.sh@345 -- # : 1
00:04:33.168 12:08:03 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:33.168 12:08:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:04:33.168 12:08:03 event -- scripts/common.sh@365 -- # decimal 1 00:04:33.168 12:08:03 event -- scripts/common.sh@353 -- # local d=1 00:04:33.168 12:08:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.168 12:08:03 event -- scripts/common.sh@355 -- # echo 1 00:04:33.168 12:08:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.168 12:08:03 event -- scripts/common.sh@366 -- # decimal 2 00:04:33.168 12:08:03 event -- scripts/common.sh@353 -- # local d=2 00:04:33.168 12:08:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.168 12:08:03 event -- scripts/common.sh@355 -- # echo 2 00:04:33.168 12:08:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.168 12:08:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.168 12:08:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.168 12:08:03 event -- scripts/common.sh@368 -- # return 0 00:04:33.168 12:08:03 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.168 12:08:03 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.168 --rc genhtml_branch_coverage=1 00:04:33.168 --rc genhtml_function_coverage=1 00:04:33.168 --rc genhtml_legend=1 00:04:33.168 --rc geninfo_all_blocks=1 00:04:33.168 --rc geninfo_unexecuted_blocks=1 00:04:33.168 00:04:33.168 ' 00:04:33.168 12:08:03 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.168 --rc genhtml_branch_coverage=1 00:04:33.168 --rc genhtml_function_coverage=1 00:04:33.168 --rc genhtml_legend=1 00:04:33.168 --rc geninfo_all_blocks=1 00:04:33.168 --rc geninfo_unexecuted_blocks=1 00:04:33.168 00:04:33.168 ' 00:04:33.168 12:08:03 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.168 --rc genhtml_branch_coverage=1 00:04:33.168 --rc genhtml_function_coverage=1 00:04:33.168 --rc genhtml_legend=1 00:04:33.168 --rc geninfo_all_blocks=1 00:04:33.168 --rc geninfo_unexecuted_blocks=1 00:04:33.168 00:04:33.168 ' 00:04:33.168 12:08:03 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.168 --rc genhtml_branch_coverage=1 00:04:33.168 --rc genhtml_function_coverage=1 00:04:33.168 --rc genhtml_legend=1 00:04:33.168 --rc geninfo_all_blocks=1 00:04:33.168 --rc geninfo_unexecuted_blocks=1 00:04:33.168 00:04:33.168 ' 00:04:33.168 12:08:03 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:33.168 12:08:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:33.168 12:08:03 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:33.168 12:08:03 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:33.168 12:08:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.168 12:08:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.168 ************************************ 00:04:33.168 START TEST event_perf 00:04:33.168 ************************************ 00:04:33.168 12:08:03 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:33.168 Running I/O for 1 seconds...[2024-12-05 
12:08:03.949598] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:04:33.168 [2024-12-05 12:08:03.949716] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59803 ] 00:04:33.425 [2024-12-05 12:08:04.094127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.425 [2024-12-05 12:08:04.147422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.425 [2024-12-05 12:08:04.147556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.425 Running I/O for 1 seconds...[2024-12-05 12:08:04.147697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.425 [2024-12-05 12:08:04.147701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.358 00:04:34.358 lcore 0: 201647 00:04:34.358 lcore 1: 201647 00:04:34.358 lcore 2: 201648 00:04:34.358 lcore 3: 201649 00:04:34.358 done. 00:04:34.358 00:04:34.358 real 0m1.260s 00:04:34.358 user 0m4.080s 00:04:34.358 sys 0m0.053s 00:04:34.358 12:08:05 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.358 12:08:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:34.358 ************************************ 00:04:34.358 END TEST event_perf 00:04:34.358 ************************************ 00:04:34.616 12:08:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:34.616 12:08:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:34.616 12:08:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.616 12:08:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.616 ************************************ 00:04:34.616 START TEST event_reactor 00:04:34.616 ************************************ 00:04:34.616 12:08:05 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:34.616 [2024-12-05 12:08:05.264187] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
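The per-lcore counters a few lines up (about 201k events on each of the four reactors in one second) are event_perf's result for this run. Reproducing it by hand only requires the flags already visible in the trace; a sketch, with a longer duration for steadier numbers:

    # Same benchmark as traced above, run manually: -m is the reactor core
    # mask, -t the duration in seconds (-t 5 here instead of the trace's -t 1).
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 5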
00:04:34.616 [2024-12-05 12:08:05.264327] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59836 ] 00:04:34.616 [2024-12-05 12:08:05.409458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.616 [2024-12-05 12:08:05.451569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.989 test_start 00:04:35.989 oneshot 00:04:35.989 tick 100 00:04:35.989 tick 100 00:04:35.989 tick 250 00:04:35.989 tick 100 00:04:35.989 tick 100 00:04:35.989 tick 100 00:04:35.989 tick 250 00:04:35.989 tick 500 00:04:35.989 tick 100 00:04:35.989 tick 100 00:04:35.989 tick 250 00:04:35.989 tick 100 00:04:35.989 tick 100 00:04:35.989 test_end 00:04:35.989 00:04:35.989 real 0m1.248s 00:04:35.989 user 0m1.107s 00:04:35.989 sys 0m0.036s 00:04:35.989 12:08:06 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.989 12:08:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:35.989 ************************************ 00:04:35.989 END TEST event_reactor 00:04:35.989 ************************************ 00:04:35.989 12:08:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.989 12:08:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:35.989 12:08:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.989 12:08:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.989 ************************************ 00:04:35.989 START TEST event_reactor_perf 00:04:35.989 ************************************ 00:04:35.989 12:08:06 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:35.989 [2024-12-05 12:08:06.566108] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
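The oneshot/tick lines above are event_reactor exercising its timed events on a single core; reactor_perf, which is starting here, then measures raw event throughput on one reactor (-c 0x1 in its EAL parameters) and reports an events-per-second figure below. If that figure ever looks off, one low-effort way to dig in is to rerun the binary under perf; this is a sketch only, and perf being installed on the test VM is an assumption:

    # Repeat the single-reactor throughput run under perf stat to see
    # cycle/instruction counts alongside the events-per-second output.
    sudo perf stat -e cycles,instructions \
        /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1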
00:04:35.989 [2024-12-05 12:08:06.566212] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59871 ] 00:04:35.989 [2024-12-05 12:08:06.706189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.989 [2024-12-05 12:08:06.753148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.926 test_start 00:04:36.926 test_end 00:04:36.926 Performance: 450435 events per second 00:04:36.926 00:04:36.926 real 0m1.242s 00:04:36.926 user 0m1.097s 00:04:36.926 sys 0m0.041s 00:04:36.926 12:08:07 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.185 12:08:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.185 ************************************ 00:04:37.185 END TEST event_reactor_perf 00:04:37.185 ************************************ 00:04:37.185 12:08:07 event -- event/event.sh@49 -- # uname -s 00:04:37.185 12:08:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:37.185 12:08:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:37.185 12:08:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.185 12:08:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.185 12:08:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.185 ************************************ 00:04:37.185 START TEST event_scheduler 00:04:37.185 ************************************ 00:04:37.185 12:08:07 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:37.185 * Looking for test storage... 
00:04:37.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:37.185 12:08:07 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.185 12:08:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.185 12:08:07 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.185 12:08:08 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.185 12:08:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:37.185 12:08:08 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.185 12:08:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.185 --rc genhtml_branch_coverage=1 00:04:37.185 --rc genhtml_function_coverage=1 00:04:37.185 --rc genhtml_legend=1 00:04:37.185 --rc geninfo_all_blocks=1 00:04:37.185 --rc geninfo_unexecuted_blocks=1 00:04:37.185 00:04:37.185 ' 00:04:37.185 12:08:08 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.185 --rc genhtml_branch_coverage=1 00:04:37.185 --rc genhtml_function_coverage=1 00:04:37.185 --rc genhtml_legend=1 00:04:37.185 --rc geninfo_all_blocks=1 00:04:37.185 --rc geninfo_unexecuted_blocks=1 00:04:37.185 00:04:37.185 ' 00:04:37.185 12:08:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.185 --rc genhtml_branch_coverage=1 00:04:37.185 --rc genhtml_function_coverage=1 00:04:37.185 --rc genhtml_legend=1 00:04:37.185 --rc geninfo_all_blocks=1 00:04:37.185 --rc geninfo_unexecuted_blocks=1 00:04:37.185 00:04:37.185 ' 00:04:37.185 12:08:08 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.185 --rc genhtml_branch_coverage=1 00:04:37.185 --rc genhtml_function_coverage=1 00:04:37.185 --rc genhtml_legend=1 00:04:37.185 --rc geninfo_all_blocks=1 00:04:37.185 --rc geninfo_unexecuted_blocks=1 00:04:37.185 00:04:37.185 ' 00:04:37.185 12:08:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:37.185 12:08:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59941 00:04:37.185 12:08:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:37.185 12:08:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.185 12:08:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59941 00:04:37.185 12:08:08 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59941 ']' 00:04:37.185 12:08:08 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.185 12:08:08 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.186 12:08:08 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.186 12:08:08 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.186 12:08:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.444 [2024-12-05 12:08:08.103744] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:04:37.444 [2024-12-05 12:08:08.103874] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59941 ] 00:04:37.444 [2024-12-05 12:08:08.257326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:37.444 [2024-12-05 12:08:08.313473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.444 [2024-12-05 12:08:08.313548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.444 [2024-12-05 12:08:08.313697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:37.444 [2024-12-05 12:08:08.313703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:37.704 12:08:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.704 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.704 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.704 POWER: Cannot set governor of lcore 0 to performance 00:04:37.704 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.704 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.704 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:37.704 POWER: Cannot set governor of lcore 0 to userspace 00:04:37.704 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:37.704 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:37.704 POWER: Unable to set Power Management Environment for lcore 0 00:04:37.704 [2024-12-05 12:08:08.363545] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:37.704 [2024-12-05 12:08:08.363560] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:37.704 [2024-12-05 12:08:08.363571] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:37.704 [2024-12-05 12:08:08.363585] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:37.704 [2024-12-05 12:08:08.363595] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:37.704 [2024-12-05 12:08:08.363604] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 [2024-12-05 12:08:08.466787] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 ************************************ 00:04:37.704 START TEST scheduler_create_thread 00:04:37.704 ************************************ 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 2 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 3 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 4 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 5 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 6 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 7 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 8 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 9 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.704 10 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.704 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.963 12:08:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.341 12:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.341 12:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:39.341 12:08:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:39.341 12:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.341 12:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.279 12:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.279 00:04:40.279 real 0m2.614s 00:04:40.279 user 0m0.020s 00:04:40.279 sys 0m0.007s 00:04:40.279 12:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.279 ************************************ 00:04:40.279 12:08:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.279 END TEST scheduler_create_thread 00:04:40.279 ************************************ 00:04:40.279 12:08:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:40.279 12:08:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59941 00:04:40.279 12:08:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59941 ']' 00:04:40.279 12:08:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59941 00:04:40.279 12:08:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:40.279 12:08:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.279 12:08:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59941 00:04:40.538 12:08:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:40.538 12:08:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:40.538 killing process with pid 59941 00:04:40.538 12:08:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59941' 00:04:40.538 12:08:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59941 00:04:40.538 12:08:11 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59941 00:04:40.798 [2024-12-05 12:08:11.574688] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:41.057 00:04:41.057 real 0m3.919s 00:04:41.057 user 0m5.740s 00:04:41.057 sys 0m0.359s 00:04:41.057 12:08:11 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.057 12:08:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.057 ************************************ 00:04:41.057 END TEST event_scheduler 00:04:41.057 ************************************ 00:04:41.057 12:08:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:41.057 12:08:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:41.057 12:08:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.057 12:08:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.057 12:08:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.057 ************************************ 00:04:41.057 START TEST app_repeat 00:04:41.057 ************************************ 00:04:41.057 12:08:11 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60045 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:41.057 Process app_repeat pid: 60045 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60045' 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:41.057 spdk_app_start Round 0 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:41.057 12:08:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60045 /var/tmp/spdk-nbd.sock 00:04:41.057 12:08:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60045 ']' 00:04:41.057 12:08:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.057 12:08:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:41.057 12:08:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.057 12:08:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.057 12:08:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.057 [2024-12-05 12:08:11.861909] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
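Before the app_repeat run above gets under way, note how the just-finished scheduler test drove its target: entirely through framework RPCs, as the scheduler.sh@39-40 trace showed. The same knobs work against any SPDK app started with --wait-for-rpc; a sketch (the first two calls mirror the trace, while framework_get_scheduler is assumed to be available in this SPDK version):

    # Select the dynamic scheduler, finish subsystem init, then read the
    # active policy back (the last call is an assumption, see above).
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py framework_get_scheduler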
00:04:41.057 [2024-12-05 12:08:11.862013] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60045 ] 00:04:41.322 [2024-12-05 12:08:12.009822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.322 [2024-12-05 12:08:12.068325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.322 [2024-12-05 12:08:12.068351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.322 12:08:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.322 12:08:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:41.322 12:08:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:41.581 Malloc0 00:04:41.581 12:08:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.147 Malloc1 00:04:42.147 12:08:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.147 12:08:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.406 /dev/nbd0 00:04:42.406 12:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.406 12:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:42.406 12:08:13 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.406 1+0 records in 00:04:42.406 1+0 records out 00:04:42.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345413 s, 11.9 MB/s 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:42.406 12:08:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:42.406 12:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.406 12:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.406 12:08:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.665 /dev/nbd1 00:04:42.665 12:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.665 12:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.665 1+0 records in 00:04:42.665 1+0 records out 00:04:42.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315393 s, 13.0 MB/s 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:42.665 12:08:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:42.665 12:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.665 12:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.665 12:08:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.665 12:08:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
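Both NBD devices were just validated with the waitfornbd helper traced above: wait until the kernel lists the device in /proc/partitions, then read a single block with O_DIRECT and confirm it came back full-sized. Condensed into a standalone sketch (paths and the variable name are illustrative):

    # Condensed waitfornbd idiom from the common/autotest_common.sh trace.
    nbd=/dev/nbd0
    until grep -q -w "${nbd#/dev/}" /proc/partitions; do sleep 0.1; done
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ] && echo "$nbd serves reads"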
00:04:42.665 12:08:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:42.924 { 00:04:42.924 "bdev_name": "Malloc0", 00:04:42.924 "nbd_device": "/dev/nbd0" 00:04:42.924 }, 00:04:42.924 { 00:04:42.924 "bdev_name": "Malloc1", 00:04:42.924 "nbd_device": "/dev/nbd1" 00:04:42.924 } 00:04:42.924 ]' 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:42.924 { 00:04:42.924 "bdev_name": "Malloc0", 00:04:42.924 "nbd_device": "/dev/nbd0" 00:04:42.924 }, 00:04:42.924 { 00:04:42.924 "bdev_name": "Malloc1", 00:04:42.924 "nbd_device": "/dev/nbd1" 00:04:42.924 } 00:04:42.924 ]' 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:42.924 /dev/nbd1' 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:42.924 /dev/nbd1' 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:42.924 256+0 records in 00:04:42.924 256+0 records out 00:04:42.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667626 s, 157 MB/s 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:42.924 256+0 records in 00:04:42.924 256+0 records out 00:04:42.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026512 s, 39.6 MB/s 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:42.924 12:08:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.183 256+0 records in 00:04:43.183 256+0 records out 00:04:43.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257491 s, 40.7 MB/s 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.183 12:08:13 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.183 12:08:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.441 12:08:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.442 12:08:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.700 12:08:14 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.700 12:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.958 12:08:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.958 12:08:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:44.216 12:08:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:44.474 [2024-12-05 12:08:15.229805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.474 [2024-12-05 12:08:15.274393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.475 [2024-12-05 12:08:15.274406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.475 [2024-12-05 12:08:15.329374] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.475 [2024-12-05 12:08:15.329434] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.760 spdk_app_start Round 1 00:04:47.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:47.760 12:08:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.761 12:08:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:47.761 12:08:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60045 /var/tmp/spdk-nbd.sock 00:04:47.761 12:08:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60045 ']' 00:04:47.761 12:08:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.761 12:08:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.761 12:08:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
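Each round boundary works the way the trace shows: event.sh asks the target to shut down with spdk_kill_instance SIGTERM, but the app_repeat binary (still pid 60045) catches the shutdown and immediately starts the next spdk_app round, so the driver only needs to sleep and wait for the RPC socket to come back. A sketch of the driver side; polling rpc_get_methods as a readiness probe is an assumption standing in for the script's waitforlisten helper:

    # Driver side of one app_repeat round transition, per the trace above.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
    # Wait until the same process is listening on the socket again.
    until ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done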
00:04:47.761 12:08:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.761 12:08:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.761 12:08:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.761 12:08:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:47.761 12:08:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.761 Malloc0 00:04:47.761 12:08:18 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.019 Malloc1 00:04:48.019 12:08:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.019 12:08:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.019 12:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.278 12:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.278 /dev/nbd0 00:04:48.538 12:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.538 12:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.538 1+0 records in 00:04:48.538 1+0 records out 
00:04:48.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213009 s, 19.2 MB/s 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.538 12:08:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.538 12:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.538 12:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.538 12:08:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.797 /dev/nbd1 00:04:48.797 12:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.797 12:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.797 1+0 records in 00:04:48.797 1+0 records out 00:04:48.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299237 s, 13.7 MB/s 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.797 12:08:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.797 12:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.797 12:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.797 12:08:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.797 12:08:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.797 12:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:49.055 { 00:04:49.055 "bdev_name": "Malloc0", 00:04:49.055 "nbd_device": "/dev/nbd0" 00:04:49.055 }, 00:04:49.055 { 00:04:49.055 "bdev_name": "Malloc1", 00:04:49.055 "nbd_device": "/dev/nbd1" 00:04:49.055 } 
00:04:49.055 ]' 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.055 { 00:04:49.055 "bdev_name": "Malloc0", 00:04:49.055 "nbd_device": "/dev/nbd0" 00:04:49.055 }, 00:04:49.055 { 00:04:49.055 "bdev_name": "Malloc1", 00:04:49.055 "nbd_device": "/dev/nbd1" 00:04:49.055 } 00:04:49.055 ]' 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.055 /dev/nbd1' 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.055 /dev/nbd1' 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.055 256+0 records in 00:04:49.055 256+0 records out 00:04:49.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00735505 s, 143 MB/s 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.055 256+0 records in 00:04:49.055 256+0 records out 00:04:49.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023039 s, 45.5 MB/s 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.055 12:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.313 256+0 records in 00:04:49.313 256+0 records out 00:04:49.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231604 s, 45.3 MB/s 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.314 12:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.572 12:08:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.829 12:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.099 12:08:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.099 12:08:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.377 12:08:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.635 [2024-12-05 12:08:21.323224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.635 [2024-12-05 12:08:21.357639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.635 [2024-12-05 12:08:21.357651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.635 [2024-12-05 12:08:21.411868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.635 [2024-12-05 12:08:21.411950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.920 12:08:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.920 spdk_app_start Round 2 00:04:53.920 12:08:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:53.920 12:08:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60045 /var/tmp/spdk-nbd.sock 00:04:53.920 12:08:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60045 ']' 00:04:53.920 12:08:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.920 12:08:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:53.920 12:08:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
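Round 1 repeats the same 1 MiB round-trip through both nbd devices. The write and verify phases are fully visible in the dd and cmp lines above; condensed, with paths exactly as in the log and error handling elided, the data path is:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: generate 1 MiB of random data, then push it through
    # every nbd device with O_DIRECT so the bdev actually sees the I/O
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB read back from each
    # device against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

cmp -b -n 1M exits non-zero on the first mismatching byte, so any corruption on the round-trip fails the test immediately.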
00:04:53.920 12:08:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.920 12:08:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.920 12:08:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.920 12:08:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:53.920 12:08:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.920 Malloc0 00:04:53.920 12:08:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.178 Malloc1 00:04:54.178 12:08:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.178 12:08:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.178 12:08:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.178 12:08:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.178 12:08:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.178 12:08:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.178 12:08:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.179 12:08:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.437 /dev/nbd0 00:04:54.694 12:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.694 12:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.694 1+0 records in 00:04:54.694 1+0 records out 
00:04:54.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353234 s, 11.6 MB/s 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.694 12:08:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.694 12:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.694 12:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.694 12:08:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.952 /dev/nbd1 00:04:54.952 12:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.952 12:08:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.952 1+0 records in 00:04:54.952 1+0 records out 00:04:54.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034151 s, 12.0 MB/s 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.952 12:08:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:54.952 12:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.952 12:08:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.952 12:08:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.952 12:08:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.952 12:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:55.210 { 00:04:55.210 "bdev_name": "Malloc0", 00:04:55.210 "nbd_device": "/dev/nbd0" 00:04:55.210 }, 00:04:55.210 { 00:04:55.210 "bdev_name": "Malloc1", 00:04:55.210 "nbd_device": "/dev/nbd1" 00:04:55.210 } 
00:04:55.210 ]' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:55.210 { 00:04:55.210 "bdev_name": "Malloc0", 00:04:55.210 "nbd_device": "/dev/nbd0" 00:04:55.210 }, 00:04:55.210 { 00:04:55.210 "bdev_name": "Malloc1", 00:04:55.210 "nbd_device": "/dev/nbd1" 00:04:55.210 } 00:04:55.210 ]' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:55.210 /dev/nbd1' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:55.210 /dev/nbd1' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:55.210 256+0 records in 00:04:55.210 256+0 records out 00:04:55.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00803273 s, 131 MB/s 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.210 256+0 records in 00:04:55.210 256+0 records out 00:04:55.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248435 s, 42.2 MB/s 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.210 256+0 records in 00:04:55.210 256+0 records out 00:04:55.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265703 s, 39.5 MB/s 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.210 12:08:25 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.210 12:08:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.467 12:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.467 12:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.467 12:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.468 12:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.468 12:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.468 12:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.468 12:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.468 12:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.468 12:08:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.468 12:08:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.032 12:08:26 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.290 12:08:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.290 12:08:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.548 12:08:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.548 [2024-12-05 12:08:27.415672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.805 [2024-12-05 12:08:27.449963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.805 [2024-12-05 12:08:27.449975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.805 [2024-12-05 12:08:27.503565] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.806 [2024-12-05 12:08:27.503649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.091 12:08:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60045 /var/tmp/spdk-nbd.sock 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60045 ']' 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
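The outer loop driving these rounds is only partially visible here (event/event.sh lines 23-25 and 34-35 appear in the trace). Pieced together, and with the relaunch of the app between rounds elided because it does not appear in this excerpt, the shape is roughly:

    # rough sketch of event/event.sh's repeat loop; $app_pid is an
    # assumed variable name (the trace only shows the literal pid 60045)
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # block until the app's RPC socket is accepting connections
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock
        # ... create Malloc0/Malloc1, attach nbd devices, write/verify ...
        "$rpc_py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3    # let the app shut down before the next round
    done

A fourth waitforlisten after the loop accounts for the "Round 3" shutdown seen in the summary that follows.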
00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:00.091 12:08:30 event.app_repeat -- event/event.sh@39 -- # killprocess 60045 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60045 ']' 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60045 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60045 00:05:00.091 killing process with pid 60045 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60045' 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60045 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60045 00:05:00.091 spdk_app_start is called in Round 0. 00:05:00.091 Shutdown signal received, stop current app iteration 00:05:00.091 Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 reinitialization... 00:05:00.091 spdk_app_start is called in Round 1. 00:05:00.091 Shutdown signal received, stop current app iteration 00:05:00.091 Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 reinitialization... 00:05:00.091 spdk_app_start is called in Round 2. 00:05:00.091 Shutdown signal received, stop current app iteration 00:05:00.091 Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 reinitialization... 00:05:00.091 spdk_app_start is called in Round 3. 00:05:00.091 Shutdown signal received, stop current app iteration 00:05:00.091 12:08:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:00.091 12:08:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:00.091 00:05:00.091 real 0m18.930s 00:05:00.091 user 0m43.089s 00:05:00.091 sys 0m3.028s 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.091 12:08:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.091 ************************************ 00:05:00.091 END TEST app_repeat 00:05:00.091 ************************************ 00:05:00.091 12:08:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:00.091 12:08:30 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:00.091 12:08:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.091 12:08:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.091 12:08:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.091 ************************************ 00:05:00.091 START TEST cpu_locks 00:05:00.091 ************************************ 00:05:00.091 12:08:30 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:00.091 * Looking for test storage... 
00:05:00.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:00.091 12:08:30 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.091 12:08:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.091 12:08:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.350 12:08:30 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.350 12:08:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:00.351 12:08:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.351 12:08:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.351 12:08:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.351 12:08:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:00.351 12:08:30 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.351 12:08:30 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.351 --rc genhtml_branch_coverage=1 00:05:00.351 --rc genhtml_function_coverage=1 00:05:00.351 --rc genhtml_legend=1 00:05:00.351 --rc geninfo_all_blocks=1 00:05:00.351 --rc geninfo_unexecuted_blocks=1 00:05:00.351 00:05:00.351 ' 00:05:00.351 12:08:30 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.351 --rc genhtml_branch_coverage=1 00:05:00.351 --rc genhtml_function_coverage=1 
00:05:00.351 --rc genhtml_legend=1 00:05:00.351 --rc geninfo_all_blocks=1 00:05:00.351 --rc geninfo_unexecuted_blocks=1 00:05:00.351 00:05:00.351 ' 00:05:00.351 12:08:30 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.351 --rc genhtml_branch_coverage=1 00:05:00.351 --rc genhtml_function_coverage=1 00:05:00.351 --rc genhtml_legend=1 00:05:00.351 --rc geninfo_all_blocks=1 00:05:00.351 --rc geninfo_unexecuted_blocks=1 00:05:00.351 00:05:00.351 ' 00:05:00.351 12:08:30 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.351 --rc genhtml_branch_coverage=1 00:05:00.351 --rc genhtml_function_coverage=1 00:05:00.351 --rc genhtml_legend=1 00:05:00.351 --rc geninfo_all_blocks=1 00:05:00.351 --rc geninfo_unexecuted_blocks=1 00:05:00.351 00:05:00.351 ' 00:05:00.351 12:08:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:00.351 12:08:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:00.351 12:08:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:00.351 12:08:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:00.351 12:08:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.351 12:08:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.351 12:08:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.351 ************************************ 00:05:00.351 START TEST default_locks 00:05:00.351 ************************************ 00:05:00.351 12:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:00.351 12:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60664 00:05:00.351 12:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60664 00:05:00.351 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60664 ']' 00:05:00.351 12:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.351 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.351 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.351 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.351 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.351 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.351 [2024-12-05 12:08:31.071487] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:05:00.351 [2024-12-05 12:08:31.071607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60664 ] 00:05:00.351 [2024-12-05 12:08:31.214974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.610 [2024-12-05 12:08:31.264162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.868 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.868 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:00.868 12:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60664 00:05:00.868 12:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60664 00:05:00.868 12:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60664 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60664 ']' 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60664 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60664 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.127 killing process with pid 60664 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60664' 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60664 00:05:01.127 12:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60664 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60664 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60664 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60664 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60664 ']' 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.694 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60664) - No such process 00:05:01.694 ERROR: process (pid: 60664) is no longer running 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.694 00:05:01.694 real 0m1.352s 00:05:01.694 user 0m1.293s 00:05:01.694 sys 0m0.554s 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.694 12:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 ************************************ 00:05:01.694 END TEST default_locks 00:05:01.694 ************************************ 00:05:01.694 12:08:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:01.694 12:08:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.694 12:08:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.694 12:08:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 ************************************ 00:05:01.694 START TEST default_locks_via_rpc 00:05:01.694 ************************************ 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60720 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60720 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60720 ']' 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.694 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 [2024-12-05 12:08:32.459291] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:01.694 [2024-12-05 12:08:32.459402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60720 ] 00:05:01.953 [2024-12-05 12:08:32.594916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.953 [2024-12-05 12:08:32.638323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60720 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60720 00:05:02.212 12:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60720 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60720 ']' 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60720 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60720 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.779 killing process with pid 60720 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60720' 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60720 00:05:02.779 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60720 00:05:03.038 00:05:03.038 real 0m1.331s 00:05:03.038 user 0m1.307s 00:05:03.038 sys 0m0.513s 00:05:03.038 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.038 12:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.038 ************************************ 00:05:03.038 END TEST default_locks_via_rpc 00:05:03.038 ************************************ 00:05:03.038 12:08:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:03.038 12:08:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.038 12:08:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.038 12:08:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.038 ************************************ 00:05:03.038 START TEST non_locking_app_on_locked_coremask 00:05:03.038 ************************************ 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60770 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60770 /var/tmp/spdk.sock 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60770 ']' 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.038 12:08:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.038 [2024-12-05 12:08:33.853647] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:05:03.038 [2024-12-05 12:08:33.853747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60770 ] 00:05:03.296 [2024-12-05 12:08:33.997245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.296 [2024-12-05 12:08:34.038756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60785 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60785 /var/tmp/spdk2.sock 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60785 ']' 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.554 12:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.554 [2024-12-05 12:08:34.372454] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:03.554 [2024-12-05 12:08:34.372546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60785 ] 00:05:03.812 [2024-12-05 12:08:34.534101] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
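Condensed, the scenario this test drives is one target that claims core 0 plus a second that opts out of claiming, so both may share the core (command lines as traced above; the backgrounding is illustrative):

build/bin/spdk_tgt -m 0x1 &    # claims /var/tmp/spdk_cpu_lock_000
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
                               # logs "CPU core locks deactivated" and claims nothing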
00:05:03.812 [2024-12-05 12:08:34.534154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.812 [2024-12-05 12:08:34.639150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.747 12:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.747 12:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.747 12:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60770 00:05:04.747 12:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60770 00:05:04.747 12:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.688 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60770 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60770 ']' 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60770 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60770 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.689 killing process with pid 60770 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60770' 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60770 00:05:05.689 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60770 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60785 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60785 ']' 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60785 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60785 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.255 killing process with pid 60785 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60785' 00:05:06.255 12:08:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60785 00:05:06.255 12:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60785 00:05:06.514 00:05:06.514 real 0m3.556s 00:05:06.514 user 0m3.895s 00:05:06.514 sys 0m1.084s 00:05:06.514 12:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.514 12:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.514 ************************************ 00:05:06.514 END TEST non_locking_app_on_locked_coremask 00:05:06.514 ************************************ 00:05:06.514 12:08:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:06.514 12:08:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.514 12:08:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.514 12:08:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.772 ************************************ 00:05:06.772 START TEST locking_app_on_unlocked_coremask 00:05:06.772 ************************************ 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:06.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60864 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60864 /var/tmp/spdk.sock 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60864 ']' 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.772 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.772 [2024-12-05 12:08:37.459820] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:06.772 [2024-12-05 12:08:37.459916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60864 ] 00:05:06.772 [2024-12-05 12:08:37.605506] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
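The teardown pattern repeated after every test is the killprocess helper from common/autotest_common.sh, traced above. A condensed sketch (the real helper, per the @960/@964 lines, also inspects the process name and special-cases targets running under sudo):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1      # verify the pid is still alive
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                     # reap it so the next test starts clean
}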
00:05:06.772 [2024-12-05 12:08:37.605576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.030 [2024-12-05 12:08:37.647596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60878 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60878 /var/tmp/spdk2.sock 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60878 ']' 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.307 12:08:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.307 [2024-12-05 12:08:37.994157] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
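The "Waiting for process..." lines come from waitforlisten in common/autotest_common.sh. Roughly, under the assumption that it probes the RPC socket until it answers (the real helper also enforces the max_retries=100 seen in the trace):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    until scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; do
        kill -0 "$pid" || return 1   # give up if the target died during startup
        sleep 0.1
    done
}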
00:05:07.308 [2024-12-05 12:08:37.994656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:05:07.308 [2024-12-05 12:08:38.152778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.566 [2024-12-05 12:08:38.263284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.155 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.155 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.155 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60878 00:05:08.155 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60878 00:05:08.155 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60864 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60864 ']' 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60864 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60864 00:05:09.090 killing process with pid 60864 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60864' 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60864 00:05:09.090 12:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60864 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60878 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60878 ']' 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60878 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60878 00:05:10.027 killing process with pid 60878 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.027 12:08:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60878' 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60878 00:05:10.027 12:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60878 00:05:10.287 ************************************ 00:05:10.287 END TEST locking_app_on_unlocked_coremask 00:05:10.287 ************************************ 00:05:10.287 00:05:10.287 real 0m3.673s 00:05:10.287 user 0m4.040s 00:05:10.287 sys 0m1.113s 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.287 12:08:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:10.287 12:08:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.287 12:08:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.287 12:08:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.287 ************************************ 00:05:10.287 START TEST locking_app_on_locked_coremask 00:05:10.287 ************************************ 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60957 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60957 /var/tmp/spdk.sock 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60957 ']' 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.287 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.546 [2024-12-05 12:08:41.171354] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:05:10.546 [2024-12-05 12:08:41.171659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:05:10.546 [2024-12-05 12:08:41.309603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.546 [2024-12-05 12:08:41.359941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60972 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60972 /var/tmp/spdk2.sock 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60972 /var/tmp/spdk2.sock 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60972 /var/tmp/spdk2.sock 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60972 ']' 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.805 12:08:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 [2024-12-05 12:08:41.706335] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
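The NOT wrapper traced above inverts an expected failure into a test pass. Condensed from the xtrace (the real helper, per the @663 check, also treats exit codes above 128, i.e. signal deaths, as genuine failures):

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # succeed only if the wrapped command failed
}
NOT waitforlisten 60972 /var/tmp/spdk2.sock   # passes: core 0 is already locked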
00:05:11.064 [2024-12-05 12:08:41.706658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60972 ] 00:05:11.064 [2024-12-05 12:08:41.863869] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60957 has claimed it. 00:05:11.064 [2024-12-05 12:08:41.863941] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.632 ERROR: process (pid: 60972) is no longer running 00:05:11.632 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60972) - No such process 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60957 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60957 00:05:11.632 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60957 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60957 ']' 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60957 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60957 00:05:12.200 killing process with pid 60957 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60957' 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60957 00:05:12.200 12:08:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60957 00:05:12.486 00:05:12.486 real 0m2.068s 00:05:12.486 user 0m2.315s 00:05:12.486 sys 0m0.600s 00:05:12.486 ************************************ 00:05:12.486 END TEST locking_app_on_locked_coremask 00:05:12.486 ************************************ 00:05:12.486 12:08:43 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.486 12:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.486 12:08:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:12.486 12:08:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.486 12:08:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.486 12:08:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.486 ************************************ 00:05:12.486 START TEST locking_overlapped_coremask 00:05:12.486 ************************************ 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61023 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61023 /var/tmp/spdk.sock 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61023 ']' 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.486 12:08:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.486 [2024-12-05 12:08:43.303515] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:05:12.486 [2024-12-05 12:08:43.303852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61023 ] 00:05:12.745 [2024-12-05 12:08:43.446809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.745 [2024-12-05 12:08:43.507223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.745 [2024-12-05 12:08:43.507380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.745 [2024-12-05 12:08:43.507385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61053 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61053 /var/tmp/spdk2.sock 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61053 /var/tmp/spdk2.sock 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61053 /var/tmp/spdk2.sock 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61053 ']' 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.681 12:08:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.681 [2024-12-05 12:08:44.348970] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
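The failure that follows is forced by the masks themselves: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they intersect on exactly one core, matching the "Cannot create lock on core 2" error below.

printf 'shared mask: 0x%x\n' $((0x7 & 0x1c))   # -> shared mask: 0x4, i.e. core 2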
00:05:13.681 [2024-12-05 12:08:44.349254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61053 ] 00:05:13.681 [2024-12-05 12:08:44.506344] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61023 has claimed it. 00:05:13.681 [2024-12-05 12:08:44.506411] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:14.249 ERROR: process (pid: 61053) is no longer running 00:05:14.249 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61053) - No such process 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61023 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61023 ']' 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61023 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.249 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61023 00:05:14.507 killing process with pid 61023 00:05:14.508 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.508 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.508 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61023' 00:05:14.508 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61023 00:05:14.508 12:08:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61023 00:05:14.765 ************************************ 00:05:14.765 END TEST locking_overlapped_coremask 00:05:14.765 ************************************ 00:05:14.765 00:05:14.765 real 0m2.289s 00:05:14.765 user 0m6.582s 00:05:14.765 sys 0m0.441s 00:05:14.765 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.765 12:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.765 12:08:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:14.765 12:08:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.765 12:08:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.765 12:08:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.765 ************************************ 00:05:14.765 START TEST locking_overlapped_coremask_via_rpc 00:05:14.765 ************************************ 00:05:14.765 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:14.765 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61105 00:05:14.765 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61105 /var/tmp/spdk.sock 00:05:14.765 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:14.766 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61105 ']' 00:05:14.766 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.766 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.766 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.766 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.766 12:08:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.766 [2024-12-05 12:08:45.632924] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:14.766 [2024-12-05 12:08:45.633194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61105 ] 00:05:15.023 [2024-12-05 12:08:45.770752] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
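check_remaining_locks, used by both overlapped-coremask tests and traced at event/cpu_locks.sh@36-38 above, reduces to a glob comparison (sketch; the two array lines are verbatim from the trace):

check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    # the lock files on disk must be exactly those for cores 0-2
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}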
00:05:15.023 [2024-12-05 12:08:45.770786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.023 [2024-12-05 12:08:45.832885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.023 [2024-12-05 12:08:45.833007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.023 [2024-12-05 12:08:45.833011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61135 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61135 /var/tmp/spdk2.sock 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61135 ']' 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.961 12:08:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.961 [2024-12-05 12:08:46.679856] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:15.961 [2024-12-05 12:08:46.680197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61135 ] 00:05:16.219 [2024-12-05 12:08:46.835220] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
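Both targets above start with locks deactivated; the claiming then happens over JSON-RPC. Roughly, using SPDK's stock rpc.py client (socket paths from the trace; the second call is the one the test expects to fail):

scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first app claims cores 0-2
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 is already claimed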
00:05:16.219 [2024-12-05 12:08:46.835274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.219 [2024-12-05 12:08:46.951351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.219 [2024-12-05 12:08:46.951463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.219 [2024-12-05 12:08:46.951463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:17.152 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.153 [2024-12-05 12:08:47.709469] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61105 has claimed it. 00:05:17.153 2024/12/05 12:08:47 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:17.153 request: 00:05:17.153 { 00:05:17.153 "method": "framework_enable_cpumask_locks", 00:05:17.153 "params": {} 00:05:17.153 } 00:05:17.153 Got JSON-RPC error response 00:05:17.153 GoRPCClient: error on JSON-RPC call 00:05:17.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61105 /var/tmp/spdk.sock 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61105 ']' 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61135 /var/tmp/spdk2.sock 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61135 ']' 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
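Unpacked from the request/response dump above, the failing exchange on /var/tmp/spdk2.sock looks like this on the wire (a sketch of the envelope; -32603 is the JSON-RPC spec's generic internal-error code, which SPDK reuses here):

-> {"jsonrpc": "2.0", "method": "framework_enable_cpumask_locks", "params": {}, "id": 1}
<- {"jsonrpc": "2.0", "error": {"code": -32603, "message": "Failed to claim CPU core: 2"}, "id": 1}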
00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.153 12:08:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.718 ************************************ 00:05:17.718 END TEST locking_overlapped_coremask_via_rpc 00:05:17.718 ************************************ 00:05:17.718 12:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.718 12:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.718 12:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:17.718 12:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:17.718 12:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:17.718 12:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:17.718 00:05:17.718 real 0m2.715s 00:05:17.718 user 0m1.430s 00:05:17.718 sys 0m0.222s 00:05:17.718 12:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.718 12:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.718 12:08:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:17.718 12:08:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61105 ]] 00:05:17.718 12:08:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61105 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61105 ']' 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61105 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61105 00:05:17.718 killing process with pid 61105 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61105' 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61105 00:05:17.718 12:08:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61105 00:05:17.976 12:08:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61135 ]] 00:05:17.976 12:08:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61135 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61135 ']' 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61135 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.976 
12:08:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61135 00:05:17.976 killing process with pid 61135 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61135' 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61135 00:05:17.976 12:08:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61135 00:05:18.542 12:08:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.542 Process with pid 61105 is not found 00:05:18.542 12:08:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:18.542 12:08:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61105 ]] 00:05:18.542 12:08:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61105 00:05:18.542 12:08:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61105 ']' 00:05:18.542 12:08:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61105 00:05:18.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61105) - No such process 00:05:18.542 12:08:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61105 is not found' 00:05:18.542 12:08:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61135 ]] 00:05:18.542 12:08:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61135 00:05:18.542 Process with pid 61135 is not found 00:05:18.542 12:08:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61135 ']' 00:05:18.542 12:08:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61135 00:05:18.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61135) - No such process 00:05:18.542 12:08:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61135 is not found' 00:05:18.542 12:08:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.542 ************************************ 00:05:18.542 END TEST cpu_locks 00:05:18.542 ************************************ 00:05:18.542 00:05:18.542 real 0m18.328s 00:05:18.542 user 0m33.982s 00:05:18.542 sys 0m5.400s 00:05:18.542 12:08:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.542 12:08:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.542 ************************************ 00:05:18.542 END TEST event 00:05:18.542 ************************************ 00:05:18.542 00:05:18.542 real 0m45.445s 00:05:18.542 user 1m29.311s 00:05:18.542 sys 0m9.194s 00:05:18.542 12:08:49 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.542 12:08:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.542 12:08:49 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:18.542 12:08:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.542 12:08:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.542 12:08:49 -- common/autotest_common.sh@10 -- # set +x 00:05:18.542 ************************************ 00:05:18.542 START TEST thread 00:05:18.542 ************************************ 00:05:18.542 12:08:49 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:18.542 * Looking for test storage... 
00:05:18.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:18.542 12:08:49 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.542 12:08:49 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.542 12:08:49 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.542 12:08:49 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.542 12:08:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.542 12:08:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.542 12:08:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.542 12:08:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.542 12:08:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.542 12:08:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.542 12:08:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.542 12:08:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.542 12:08:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.542 12:08:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.542 12:08:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.542 12:08:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:18.542 12:08:49 thread -- scripts/common.sh@345 -- # : 1 00:05:18.542 12:08:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.542 12:08:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.542 12:08:49 thread -- scripts/common.sh@365 -- # decimal 1 00:05:18.542 12:08:49 thread -- scripts/common.sh@353 -- # local d=1 00:05:18.542 12:08:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.542 12:08:49 thread -- scripts/common.sh@355 -- # echo 1 00:05:18.542 12:08:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.542 12:08:49 thread -- scripts/common.sh@366 -- # decimal 2 00:05:18.542 12:08:49 thread -- scripts/common.sh@353 -- # local d=2 00:05:18.542 12:08:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.542 12:08:49 thread -- scripts/common.sh@355 -- # echo 2 00:05:18.542 12:08:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.542 12:08:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.542 12:08:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.542 12:08:49 thread -- scripts/common.sh@368 -- # return 0 00:05:18.542 12:08:49 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.542 12:08:49 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.542 --rc genhtml_branch_coverage=1 00:05:18.542 --rc genhtml_function_coverage=1 00:05:18.542 --rc genhtml_legend=1 00:05:18.542 --rc geninfo_all_blocks=1 00:05:18.542 --rc geninfo_unexecuted_blocks=1 00:05:18.542 00:05:18.542 ' 00:05:18.542 12:08:49 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.542 --rc genhtml_branch_coverage=1 00:05:18.543 --rc genhtml_function_coverage=1 00:05:18.543 --rc genhtml_legend=1 00:05:18.543 --rc geninfo_all_blocks=1 00:05:18.543 --rc geninfo_unexecuted_blocks=1 00:05:18.543 00:05:18.543 ' 00:05:18.543 12:08:49 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:18.543 --rc genhtml_branch_coverage=1 00:05:18.543 --rc genhtml_function_coverage=1 00:05:18.543 --rc genhtml_legend=1 00:05:18.543 --rc geninfo_all_blocks=1 00:05:18.543 --rc geninfo_unexecuted_blocks=1 00:05:18.543 00:05:18.543 ' 00:05:18.543 12:08:49 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.543 --rc genhtml_branch_coverage=1 00:05:18.543 --rc genhtml_function_coverage=1 00:05:18.543 --rc genhtml_legend=1 00:05:18.543 --rc geninfo_all_blocks=1 00:05:18.543 --rc geninfo_unexecuted_blocks=1 00:05:18.543 00:05:18.543 ' 00:05:18.543 12:08:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:18.543 12:08:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:18.543 12:08:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.543 12:08:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.543 ************************************ 00:05:18.543 START TEST thread_poller_perf 00:05:18.543 ************************************ 00:05:18.543 12:08:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:18.800 [2024-12-05 12:08:49.426885] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:05:18.800 [2024-12-05 12:08:49.427846] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61289 ] 00:05:18.800 [2024-12-05 12:08:49.574401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.800 [2024-12-05 12:08:49.616352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.800 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:20.171 [2024-12-05T12:08:51.040Z] ======================================
00:05:20.171 [2024-12-05T12:08:51.040Z] busy:2208968210 (cyc)
00:05:20.171 [2024-12-05T12:08:51.040Z] total_run_count: 381000
00:05:20.171 [2024-12-05T12:08:51.040Z] tsc_hz: 2200000000 (cyc)
00:05:20.171 [2024-12-05T12:08:51.040Z] ======================================
00:05:20.171 [2024-12-05T12:08:51.040Z] poller_cost: 5797 (cyc), 2635 (nsec)
00:05:20.171
00:05:20.171 real 0m1.258s
00:05:20.171 user 0m1.101s
00:05:20.171 sys 0m0.048s
00:05:20.171 12:08:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:20.171 12:08:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:20.171 ************************************
00:05:20.171 END TEST thread_poller_perf
00:05:20.171 ************************************
00:05:20.171 12:08:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:20.171 12:08:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:20.171 12:08:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:20.172 12:08:50 thread -- common/autotest_common.sh@10 -- # set +x
00:05:20.172 ************************************
00:05:20.172 START TEST thread_poller_perf
00:05:20.172 ************************************
00:05:20.172 12:08:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:20.172 [2024-12-05 12:08:50.736915] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:05:20.172 [2024-12-05 12:08:50.737010] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61325 ]
00:05:20.172 [2024-12-05 12:08:50.880906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.172 Running 1000 pollers for 1 seconds with 0 microseconds period.
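The poller_cost line in the table above is derived from the two counters next to it: busy TSC cycles divided by total_run_count, then converted to nanoseconds through the reported tsc_hz. A quick sketch of that arithmetic for the 1-microsecond-period run (the 0-period run whose table follows works out to 445 cyc and 202 nsec the same way):

    busy=2208968210 runs=381000 tsc_hz=2200000000   # values from the table above
    echo "$(( busy / runs )) cyc per poll"                         # 5797
    echo "$(( busy * 1000000000 / runs / tsc_hz )) nsec per poll"  # 2635

The -b 1000 -l 1 -t 1 invocation means 1000 pollers with a 1-microsecond period for 1 second, so each timed poll costs roughly 13x the cycles of the busy-spin polls measured next.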
00:05:20.172 [2024-12-05 12:08:50.919568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.103 [2024-12-05T12:08:51.972Z] ======================================
00:05:21.103 [2024-12-05T12:08:51.972Z] busy:2202044290 (cyc)
00:05:21.103 [2024-12-05T12:08:51.972Z] total_run_count: 4946000
00:05:21.103 [2024-12-05T12:08:51.972Z] tsc_hz: 2200000000 (cyc)
00:05:21.103 [2024-12-05T12:08:51.972Z] ======================================
00:05:21.103 [2024-12-05T12:08:51.972Z] poller_cost: 445 (cyc), 202 (nsec)
00:05:21.103
00:05:21.103 real 0m1.247s
00:05:21.103 user 0m1.101s
00:05:21.103 sys 0m0.039s
00:05:21.103 12:08:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.103 ************************************
00:05:21.103 END TEST thread_poller_perf
00:05:21.103 ************************************
00:05:21.103 12:08:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:21.361 12:08:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:21.361
00:05:21.361 real 0m2.784s
00:05:21.361 user 0m2.350s
00:05:21.361 sys 0m0.217s
00:05:21.361 12:08:52 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.361 12:08:52 thread -- common/autotest_common.sh@10 -- # set +x
00:05:21.361 ************************************
00:05:21.361 END TEST thread
00:05:21.361 ************************************
00:05:21.361 12:08:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:21.361 12:08:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:21.361 12:08:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.361 12:08:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.361 12:08:52 -- common/autotest_common.sh@10 -- # set +x
00:05:21.361 ************************************
00:05:21.361 START TEST app_cmdline
00:05:21.361 ************************************
00:05:21.361 12:08:52 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:21.361 * Looking for test storage...
00:05:21.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:21.620 12:08:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:21.620 12:08:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61407
00:05:21.620 12:08:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61407
00:05:21.620 12:08:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61407 ']'
00:05:21.620 12:08:52 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.620 12:08:52 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:21.620 12:08:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:21.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:21.620 12:08:52 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:21.620 12:08:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:21.620 12:08:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:21.620 [2024-12-05 12:08:52.326910] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
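spdk_tgt was just launched with --rpcs-allowed spdk_get_version,rpc_get_methods, which trims the JSON-RPC surface on /var/tmp/spdk.sock down to exactly those two methods; everything the test does next hinges on that allowlist. A sketch of the three calls it exercises, using the same rpc.py client seen in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC spdk_get_version          # allowed: returns the version object shown below
    $RPC rpc_get_methods           # allowed: lists exactly the two permitted methods
    $RPC env_dpdk_get_mem_stats    # filtered: fails with JSON-RPC -32601, Method not found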
00:05:21.620 [2024-12-05 12:08:52.327235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61407 ]
00:05:21.620 [2024-12-05 12:08:52.474546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.878 [2024-12-05 12:08:52.520005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.136 12:08:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:22.136 12:08:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:05:22.136 12:08:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:05:22.393 {
00:05:22.393   "fields": {
00:05:22.393     "commit": "85bc1e85a",
00:05:22.393     "major": 25,
00:05:22.393     "minor": 1,
00:05:22.393     "patch": 0,
00:05:22.393     "suffix": "-pre"
00:05:22.393   },
00:05:22.393   "version": "SPDK v25.01-pre git sha1 85bc1e85a"
00:05:22.393 }
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:22.393 12:08:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:22.393 12:08:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:22.393 12:08:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:22.393 12:08:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:05:22.394 12:08:53 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:22.652 2024/12/05 12:08:53 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found
00:05:22.652 request:
00:05:22.652 {
00:05:22.652   "method": "env_dpdk_get_mem_stats",
00:05:22.652   "params": {}
00:05:22.652 }
00:05:22.652 Got JSON-RPC error response
00:05:22.652 GoRPCClient: error on JSON-RPC call
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:22.652 12:08:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61407
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61407 ']'
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61407
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61407
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:22.652 killing process with pid 61407
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61407'
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 61407
00:05:22.652 12:08:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 61407
00:05:23.218
00:05:23.218 real 0m1.754s
00:05:23.218 user 0m2.110s
00:05:23.218 sys 0m0.502s
00:05:23.218 12:08:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:23.218 12:08:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:23.218 ************************************
00:05:23.218 END TEST app_cmdline
00:05:23.218 ************************************
00:05:23.218 12:08:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:23.218 12:08:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:23.218 12:08:53 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.218 12:08:53 -- common/autotest_common.sh@10 -- # set +x
00:05:23.218 ************************************
00:05:23.218 START TEST version
00:05:23.218 ************************************
00:05:23.218 12:08:53 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:23.218 * Looking for test storage...
00:05:23.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:23.218 12:08:54 version -- app/version.sh@17 -- # get_header_version major
00:05:23.218 12:08:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:23.218 12:08:54 version -- app/version.sh@14 -- # cut -f2
00:05:23.218 12:08:54 version -- app/version.sh@14 -- # tr -d '"'
00:05:23.218 12:08:54 version -- app/version.sh@17 -- # major=25
00:05:23.218 12:08:54 version -- app/version.sh@18 -- # get_header_version minor
00:05:23.218 12:08:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:23.218 12:08:54 version -- app/version.sh@14 -- # cut -f2
00:05:23.218 12:08:54 version -- app/version.sh@14 -- # tr -d '"'
00:05:23.218 12:08:54 version -- app/version.sh@18 -- # minor=1
00:05:23.218 12:08:54 version -- app/version.sh@19 -- # get_header_version patch
00:05:23.218 12:08:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:23.218 12:08:54 version -- app/version.sh@14 -- # tr -d '"'
00:05:23.218 12:08:54 version -- app/version.sh@14 -- # cut -f2
00:05:23.218 12:08:54 version -- app/version.sh@19 -- # patch=0
00:05:23.218 12:08:54 version -- app/version.sh@20 -- # get_header_version suffix
00:05:23.218 12:08:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:23.218 12:08:54 version -- app/version.sh@14 -- # tr -d '"'
00:05:23.218 12:08:54 version -- app/version.sh@14 -- # cut -f2
00:05:23.218 12:08:54 version -- app/version.sh@20 -- # suffix=-pre
00:05:23.218 12:08:54 version -- app/version.sh@22 -- # version=25.1
00:05:23.218 12:08:54 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:23.218 12:08:54 version -- app/version.sh@28 -- # version=25.1rc0
00:05:23.218 12:08:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:23.218 12:08:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:23.476 12:08:54 version -- app/version.sh@30 -- # py_version=25.1rc0
00:05:23.476 12:08:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:05:23.476
00:05:23.476 real 0m0.250s
00:05:23.476 user 0m0.174s
00:05:23.476 sys 0m0.115s
00:05:23.476 12:08:54 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:23.476 12:08:54 version -- common/autotest_common.sh@10 -- # set +x
00:05:23.476 ************************************
00:05:23.476 END TEST version
00:05:23.476 ************************************
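The extraction traced above is a three-stage pipeline per field: grep the #define out of include/spdk/version.h, cut the second tab-separated column, strip the quotes with tr. A condensed sketch with the same paths (how version.sh maps the -pre suffix to the rc0 that both the header and the Python module report is assumed, not shown in the trace):

    get_header_version() {  # e.g. get_header_version MAJOR -> 25
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }

    version=$(get_header_version MAJOR).$(get_header_version MINOR)  # 25.1
    (( $(get_header_version PATCH) != 0 )) && version+=.$(get_header_version PATCH)
    echo "${version}rc0"  # 25.1rc0, matching python3 -c 'import spdk; print(spdk.__version__)'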
00:05:23.476 12:08:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:05:23.476 12:08:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:05:23.476 12:08:54 -- spdk/autotest.sh@194 -- # uname -s
00:05:23.476 12:08:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:05:23.476 12:08:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:23.476 12:08:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:05:23.476 12:08:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:05:23.476 12:08:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:05:23.476 12:08:54 -- spdk/autotest.sh@260 -- # timing_exit lib
00:05:23.476 12:08:54 -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:23.476 12:08:54 -- common/autotest_common.sh@10 -- # set +x
00:05:23.476 12:08:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:05:23.476 12:08:54 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:05:23.476 12:08:54 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:05:23.476 12:08:54 -- spdk/autotest.sh@277 -- # export NET_TYPE
00:05:23.476 12:08:54 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']'
00:05:23.476 12:08:54 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']'
00:05:23.476 12:08:54 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp
00:05:23.476 12:08:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:23.476 12:08:54 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.476 12:08:54 -- common/autotest_common.sh@10 -- # set +x
00:05:23.476 ************************************
00:05:23.476 START TEST nvmf_tcp
00:05:23.476 ************************************
00:05:23.476 12:08:54 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp
00:05:23.476 * Looking for test storage...
00:05:23.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:05:23.735 12:08:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:05:23.736 12:08:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
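The nvmf_target_core suite launched next goes through run_test, the same wrapper that has produced every START/END banner and real/user/sys total in this log. A minimal sketch of its observable behavior (the real helper in common/autotest_common.sh additionally manages xtrace and argument checks such as the '[' 3 -le 1 ']' guard traced below):

    run_test() {  # run_test <name> <command...>
        local name=$1 stars='************************************'
        shift
        echo "$stars"; echo "START TEST $name"; echo "$stars"
        time "$@"    # yields the real/user/sys totals printed after each suite
        echo "$stars"; echo "END TEST $name"; echo "$stars"
    }

    run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp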
00:05:23.736 12:08:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:23.736 12:08:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:23.736 12:08:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.736 12:08:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:23.736 ************************************
00:05:23.736 START TEST nvmf_target_core
00:05:23.736 ************************************
00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:23.736 * Looking for test storage...
00:05:23.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf
00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!'
Linux = Linux ']' 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.736 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.995 12:08:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.996 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:23.996 ************************************ 00:05:23.996 START TEST nvmf_abort 00:05:23.996 ************************************ 00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:23.996 * Looking for test storage... 
00:05:23.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:05:23.996 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
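Sourcing nvmf/common.sh emits the warning seen earlier in the nvmf_target_core trace, /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected, because the traced test '[' '' -eq 1 ']' hands an empty string to a numeric comparison. A two-line reproduction plus the usual guard (the variable name here is hypothetical):

    empty_flag=""                                   # hypothetical stand-in for the unset value
    [ "$empty_flag" -eq 1 ] && echo hugepages       # reproduces: [: : integer expression expected
    [ "${empty_flag:-0}" -eq 1 ] && echo hugepages  # guarded: empty defaults to 0, no warning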
00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:05:23.997
12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:05:23.997 Cannot find device "nvmf_init_br" 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:05:23.997 Cannot find device "nvmf_init_br2" 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:05:23.997 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:05:24.255 Cannot find device "nvmf_tgt_br" 00:05:24.255 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:05:24.255 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:05:24.255 Cannot find device "nvmf_tgt_br2" 00:05:24.255 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:05:24.256 Cannot find device "nvmf_init_br" 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:05:24.256 Cannot find device "nvmf_init_br2" 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:05:24.256 Cannot find device "nvmf_tgt_br" 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:05:24.256 Cannot find device "nvmf_tgt_br2" 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:05:24.256 Cannot find device "nvmf_br" 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:05:24.256 Cannot find device "nvmf_init_if" 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:05:24.256 Cannot find device "nvmf_init_if2" 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:05:24.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:05:24.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:05:24.256 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:05:24.256 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:05:24.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:05:24.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:05:24.514 00:05:24.514 --- 10.0.0.3 ping statistics --- 00:05:24.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.514 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:05:24.514 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:05:24.514 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:05:24.514 00:05:24.514 --- 10.0.0.4 ping statistics --- 00:05:24.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.514 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:05:24.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:24.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:05:24.514 00:05:24.514 --- 10.0.0.1 ping statistics --- 00:05:24.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.514 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:05:24.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:24.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms
00:05:24.514
00:05:24.514 --- 10.0.0.2 ping statistics ---
00:05:24.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:24.514 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=61827
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 61827
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 61827 ']'
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:24.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:24.514 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:24.772 [2024-12-05 12:08:55.421679] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:05:24.772 [2024-12-05 12:08:55.421759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:24.772 [2024-12-05 12:08:55.568969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:24.772 [2024-12-05 12:08:55.627011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:24.772 [2024-12-05 12:08:55.627088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:24.772 [2024-12-05 12:08:55.627103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:24.772 [2024-12-05 12:08:55.627113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:24.772 [2024-12-05 12:08:55.627123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:24.772 [2024-12-05 12:08:55.628418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:24.772 [2024-12-05 12:08:55.628565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:24.772 [2024-12-05 12:08:55.628573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:25.030 [2024-12-05 12:08:55.812639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.030 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:25.031 Malloc0
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:25.031 Delay0
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:25.031 [2024-12-05 12:08:55.888693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:25.031 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:25.288 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:25.288 12:08:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:05:25.288 [2024-12-05 12:08:56.085104] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:05:27.830 Initializing NVMe Controllers
00:05:27.830 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:05:27.830 controller IO queue size 128 less than required
00:05:27.830 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:05:27.830 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:05:27.830 Initialization complete. Launching workers.
00:05:27.830 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28372
00:05:27.830 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28433, failed to submit 62
00:05:27.830 success 28376, unsuccessful 57, failed 0
00:05:27.830 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:05:27.830 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:27.830 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:27.830 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:27.830 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:05:27.830 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:05:27.830 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:05:27.830 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:05:27.831 rmmod nvme_tcp
00:05:27.831 rmmod nvme_fabrics
00:05:27.831 rmmod nvme_keyring
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 61827 ']'
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 61827
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 61827 ']'
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 61827
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61827
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 61827
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61827'
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 61827
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 61827
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:27.831 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0
00:05:28.090
00:05:28.090 real 0m4.085s
00:05:28.090 user 0m10.528s
00:05:28.090 sys 0m1.077s
00:05:28.090 ************************************
00:05:28.090 END TEST nvmf_abort
00:05:28.090 ************************************
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:28.090 ************************************
00:05:28.090 START TEST nvmf_ns_hotplug_stress
00:05:28.090 ************************************
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:05:28.090 * Looking for test storage...
00:05:28.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:28.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.090 --rc genhtml_branch_coverage=1
00:05:28.090 --rc genhtml_function_coverage=1
00:05:28.090 --rc genhtml_legend=1
00:05:28.090 --rc geninfo_all_blocks=1
00:05:28.090 --rc geninfo_unexecuted_blocks=1
00:05:28.090
00:05:28.090 '
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:28.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.090 --rc genhtml_branch_coverage=1
00:05:28.090 --rc genhtml_function_coverage=1
00:05:28.090 --rc genhtml_legend=1
00:05:28.090 --rc geninfo_all_blocks=1
00:05:28.090 --rc geninfo_unexecuted_blocks=1
00:05:28.090
00:05:28.090 '
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:28.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.090 --rc genhtml_branch_coverage=1
00:05:28.090 --rc genhtml_function_coverage=1
00:05:28.090 --rc genhtml_legend=1
00:05:28.090 --rc geninfo_all_blocks=1
00:05:28.090 --rc geninfo_unexecuted_blocks=1
00:05:28.090
00:05:28.090 '
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:28.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.090 --rc genhtml_branch_coverage=1
00:05:28.090 --rc genhtml_function_coverage=1
00:05:28.090 --rc genhtml_legend=1
00:05:28.090 --rc geninfo_all_blocks=1
00:05:28.090 --rc geninfo_unexecuted_blocks=1
00:05:28.090
00:05:28.090 '
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:28.090 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:05:28.349 Cannot find device "nvmf_init_br"
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true
00:05:28.349 12:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:05:28.349 Cannot find device "nvmf_init_br2"
00:05:28.349 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true
00:05:28.349 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:05:28.349 Cannot find device "nvmf_tgt_br"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:05:28.350 Cannot find device "nvmf_tgt_br2"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:05:28.350 Cannot find device "nvmf_init_br"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:05:28.350 Cannot find device "nvmf_init_br2"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:05:28.350 Cannot find device "nvmf_tgt_br"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:05:28.350 Cannot find device "nvmf_tgt_br2"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:05:28.350 Cannot find device "nvmf_br"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:05:28.350 Cannot find device "nvmf_init_if"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:05:28.350 Cannot find device "nvmf_init_if2"
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:05:28.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:05:28.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:05:28.350 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:05:28.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:05:28.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms
00:05:28.608
00:05:28.608 --- 10.0.0.3 ping statistics ---
00:05:28.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:28.608 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:05:28.608 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:05:28.608 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms
00:05:28.608
00:05:28.608 --- 10.0.0.4 ping statistics ---
00:05:28.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:28.608 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:05:28.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:28.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:05:28.608
00:05:28.608 --- 10.0.0.1 ping statistics ---
00:05:28.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:28.608 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:05:28.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:28.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms
00:05:28.608
00:05:28.608 --- 10.0.0.2 ping statistics ---
00:05:28.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:28.608 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62104
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:28.608 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62104
00:05:28.609 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62104 ']'
00:05:28.609 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.609 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:28.609 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:28.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:28.609 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:28.609 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:28.609 [2024-12-05 12:08:59.420270] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:05:28.609 [2024-12-05 12:08:59.420370] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:05:28.865 [2024-12-05 12:08:59.570426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:28.865 [2024-12-05 12:08:59.623149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:05:28.866 [2024-12-05 12:08:59.623684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:05:28.866 [2024-12-05 12:08:59.623964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:28.866 [2024-12-05 12:08:59.624181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:28.866 [2024-12-05 12:08:59.624412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:05:28.866 [2024-12-05 12:08:59.625794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.866 [2024-12-05 12:08:59.625874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.866 [2024-12-05 12:08:59.625880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.122 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.122 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:29.122 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:29.122 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.122 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:29.122 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:29.122 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:29.122 12:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:29.378 [2024-12-05 12:09:00.077163] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.378 12:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:29.636 12:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:05:29.892 [2024-12-05 12:09:00.697692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:05:29.892 12:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:05:30.149 12:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:30.406 Malloc0 00:05:30.406 12:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:30.971 Delay0 00:05:30.971 12:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.971 12:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:31.228 NULL1 00:05:31.486 12:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:31.744 12:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:31.744 12:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=62223 00:05:31.744 12:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:31.744 12:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.119 Read completed with error (sct=0, sc=11) 00:05:33.119 12:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.119 12:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:33.119 12:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:33.377 true 00:05:33.377 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:33.377 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.311 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.311 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:34.311 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:34.874 true 00:05:34.874 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:34.874 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.874 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.130 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:35.130 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:35.388 true 00:05:35.388 12:09:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:35.388 12:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.646 12:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.212 12:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:36.212 12:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:36.212 true 00:05:36.212 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:36.212 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.152 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.410 12:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:37.410 12:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:37.668 true 00:05:37.668 12:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:37.668 12:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.928 12:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.187 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:38.187 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:38.445 true 00:05:38.445 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:38.445 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.012 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.012 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:39.012 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:39.269 true 00:05:39.269 12:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 62223 00:05:39.269 12:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.204 12:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.204 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.461 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:40.461 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:40.718 true 00:05:40.718 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:40.718 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.977 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.236 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:41.236 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:41.495 true 00:05:41.495 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:41.495 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.753 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.013 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:42.013 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:42.272 true 00:05:42.272 12:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:42.272 12:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.208 12:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.466 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:43.466 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:43.725 true 00:05:43.725 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 62223 00:05:43.725 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.984 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.243 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:44.243 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:44.500 true 00:05:44.500 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:44.500 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.757 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.013 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:45.013 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:45.270 true 00:05:45.270 12:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:45.270 12:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.203 12:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.467 12:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:46.467 12:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:46.732 true 00:05:46.732 12:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:46.732 12:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.991 12:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.337 12:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:47.337 12:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:47.596 true 00:05:47.596 12:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:47.596 12:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.855 12:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.114 12:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:48.114 12:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:48.373 true 00:05:48.373 12:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:48.373 12:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.310 12:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.569 12:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:49.569 12:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:49.829 true 00:05:49.829 12:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:49.829 12:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.089 12:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.348 12:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:50.348 12:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:50.607 true 00:05:50.607 12:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:50.607 12:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.865 12:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.123 12:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:51.123 12:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:51.380 true 00:05:51.380 12:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:51.380 12:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.312 12:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.571 12:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:52.571 12:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:52.828 true 00:05:52.828 12:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:52.828 12:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.086 12:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.342 12:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:53.342 12:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:53.599 true 00:05:53.599 12:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:53.599 12:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.163 12:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.420 12:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:54.420 12:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:54.677 true 00:05:54.677 12:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:54.677 12:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.935 12:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.192 12:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:55.192 12:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:55.449 true 00:05:55.449 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:55.449 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.386 12:09:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.646 12:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:56.646 12:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:56.908 true 00:05:56.908 12:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:56.908 12:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.168 12:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.428 12:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:57.428 12:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:57.687 true 00:05:57.687 12:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:57.687 12:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.946 12:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.204 12:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:58.204 12:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:58.463 true 00:05:58.463 12:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:58.463 12:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.397 12:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.656 12:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:59.656 12:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:59.914 true 00:05:59.914 12:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:05:59.914 12:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.173 12:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.432 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:00.432 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:00.690 true 00:06:00.690 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:06:00.690 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.948 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.207 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:01.207 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:01.544 true 00:06:01.544 12:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:06:01.544 12:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.494 Initializing NVMe Controllers 00:06:02.494 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:06:02.494 Controller IO queue size 128, less than required. 00:06:02.494 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:02.494 Controller IO queue size 128, less than required. 00:06:02.494 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:02.494 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:02.494 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:02.494 Initialization complete. Launching workers. 
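The @44-@50 entries that repeat through this stretch are the hotplug loop itself: while the spdk_nvme_perf instance started at @40 (PID 62223 in this run, checked with kill -0) stays alive, namespace 1 is removed, Delay0 is re-added, and NULL1 is grown by one block. Reconstructed from the traced line numbers, the loop is roughly the sketch below; the rpc shorthand is ours, and the exact size arithmetic in ns_hotplug_stress.sh may differ.

    # Sketch of ns_hotplug_stress.sh lines 44-50 as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"
    done

Once perf exits, kill -0 fails (the "No such process" message below), the loop ends, and both namespaces are removed before the multi-threaded phase starts.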
00:06:02.494 ========================================================
00:06:02.494 Latency(us)
00:06:02.494 Device Information : IOPS MiB/s Average min max
00:06:02.494 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 347.07 0.17 159804.67 3652.35 1026183.02
00:06:02.494 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9088.70 4.44 14083.16 3411.02 593104.87
00:06:02.494 ========================================================
00:06:02.494 Total : 9435.77 4.61 19443.09 3411.02 1026183.02
00:06:02.494
00:06:02.494 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.494 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:02.494 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:02.753 true 00:06:02.753 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62223 00:06:02.753 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62223) - No such process 00:06:02.753 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62223 00:06:02.753 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.011 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.270 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:03.270 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:03.270 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:03.270 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.270 12:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:03.528 null0 00:06:03.528 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.528 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.528 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:03.786 null1 00:06:03.786 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.786 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.786 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:04.044 null2 00:06:04.044 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.044 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.044 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:04.303 null3 00:06:04.303 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.303 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.303 12:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:04.303 null4 00:06:04.561 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.561 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.561 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:04.819 null5 00:06:04.819 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:04.819 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:04.819 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:05.077 null6 00:06:05.077 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:05.077 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.077 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:05.336 null7 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
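Before the workers start, the @58-@60 entries above create one small null bdev per worker (null0 through null7, 100 blocks of 4096 bytes each). A minimal sketch, assuming the loop structure implied by the (( i = 0 )) / (( i < nthreads )) trace lines:

    # Sketch of ns_hotplug_stress.sh lines 58-60 as traced above.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create "null$i" 100 4096
    done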
00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
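The interleaved @62-@64 and @14-@17 entries above are eight background copies of add_remove, one per null bdev, each cycling its own namespace ID against nqn.2016-06.io.spdk:cnode1 while the others do the same; the @66 wait below collects them. Reconstructed from the traced line numbers, the shape is roughly the following sketch:

    # Sketch of add_remove (ns_hotplug_stress.sh lines 14-18) and the parallel
    # launch at lines 62-66, as traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"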
00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:05.336 12:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63297 63298 63300 63303 63306 63307 63309 63311 00:06:05.595 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.595 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.595 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.596 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.596 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.596 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.596 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.596 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.854 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.855 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.855 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.855 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.113 
12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.113 12:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.372 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.631 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.889 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.148 
12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.148 12:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.405 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.663 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.921 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.921 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.921 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.921 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.921 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.921 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.921 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.179 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.179 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.179 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.179 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.437 
12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:08.437 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.719 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:08.993 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.251 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.251 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.251 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.251 12:09:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.251 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.251 12:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.251 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.251 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.252 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.252 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.510 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
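[Annotation] The records above are the core of the hotplug stress loop: ns_hotplug_stress.sh lines 16-18 run ten passes, and in each pass every namespace ID 1-8 (nsid N backed by null bdev "null$((N-1))") is attached to nqn.2016-06.io.spdk:cnode1 with nvmf_subsystem_add_ns and detached again with nvmf_subsystem_remove_ns, in a different order each time. A minimal sketch of that loop, reconstructed from the xtrace output rather than copied from the script (the shuffled ordering via shuf is an assumption):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; i++ )); do
        # Attach nsid 1..8; nsid N is backed by null bdev "null$((N-1))".
        for n in $(shuf -e {1..8}); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # Detach the same namespaces again, also in shuffled order.
        for n in $(shuf -e {1..8}); do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done

The randomized order is what makes the add and remove records from adjacent passes interleave freely in the trace; the stress is in exercising attach/detach races on the subsystem rather than in any single RPC.
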
00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.511 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.769 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.028 12:09:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.028 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.286 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.286 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.286 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.286 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.286 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.286 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.287 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.287 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.287 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
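[Annotation] The churn above can be observed from the side while the loop runs: SPDK's nvmf_get_subsystems RPC reports the namespace list currently attached to each subsystem. This is an illustrative helper, not part of the test, and the jq filter is an assumption about how one would slice the output:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Print the nsids currently attached to cnode1 (empty between a full
    # remove pass and the next add pass).
    "$rpc_py" nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1")
                  | [.namespaces[].nsid]'
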
00:06:10.287 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.545 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.804 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.063 12:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.321 12:09:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:11.321 rmmod nvme_tcp 00:06:11.321 rmmod nvme_fabrics 00:06:11.321 rmmod nvme_keyring 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:11.321 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:11.322 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:11.322 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62104 ']' 00:06:11.322 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62104 00:06:11.322 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62104 ']' 00:06:11.322 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62104 00:06:11.322 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:11.322 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.322 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62104 00:06:11.580 killing process with pid 62104 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62104' 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62104 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62104 00:06:11.580 12:09:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:11.580 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:06:11.840 00:06:11.840 real 0m43.902s 00:06:11.840 user 3m34.000s 00:06:11.840 sys 0m13.505s 00:06:11.840 12:09:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:11.840 ************************************ 00:06:11.840 END TEST nvmf_ns_hotplug_stress 00:06:11.840 ************************************ 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.840 12:09:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:12.101 ************************************ 00:06:12.101 START TEST nvmf_delete_subsystem 00:06:12.101 ************************************ 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:12.101 * Looking for test storage... 00:06:12.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.101 --rc genhtml_branch_coverage=1 00:06:12.101 --rc genhtml_function_coverage=1 00:06:12.101 --rc genhtml_legend=1 00:06:12.101 --rc geninfo_all_blocks=1 00:06:12.101 --rc geninfo_unexecuted_blocks=1 00:06:12.101 00:06:12.101 ' 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.101 --rc genhtml_branch_coverage=1 00:06:12.101 --rc genhtml_function_coverage=1 00:06:12.101 --rc genhtml_legend=1 00:06:12.101 --rc geninfo_all_blocks=1 00:06:12.101 --rc geninfo_unexecuted_blocks=1 00:06:12.101 00:06:12.101 ' 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.101 --rc genhtml_branch_coverage=1 00:06:12.101 --rc genhtml_function_coverage=1 00:06:12.101 --rc genhtml_legend=1 00:06:12.101 --rc geninfo_all_blocks=1 00:06:12.101 --rc geninfo_unexecuted_blocks=1 00:06:12.101 00:06:12.101 ' 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.101 --rc genhtml_branch_coverage=1 00:06:12.101 --rc genhtml_function_coverage=1 00:06:12.101 --rc genhtml_legend=1 00:06:12.101 --rc geninfo_all_blocks=1 00:06:12.101 --rc geninfo_unexecuted_blocks=1 00:06:12.101 00:06:12.101 ' 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.101 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.101 
12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.102 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
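The address plan fixed by these assignments pairs two host-side initiator interfaces with two target interfaces that will live inside the nvmf_tgt_ns_spdk namespace, all on 10.0.0.0/24. A commented restatement, a sketch only, with names and addresses taken from the trace above:

  # Initiator side: stays in the default namespace
  NVMF_FIRST_INITIATOR_IP=10.0.0.1     # assigned to nvmf_init_if
  NVMF_SECOND_INITIATOR_IP=10.0.0.2    # assigned to nvmf_init_if2
  # Target side: the nvmf_tgt_if* veths get moved into nvmf_tgt_ns_spdk
  NVMF_FIRST_TARGET_IP=10.0.0.3        # the NVMe/TCP listener address
  NVMF_SECOND_TARGET_IP=10.0.0.4
  # The *_br veth peers are later enslaved to this bridge so both sides can talk
  NVMF_BRIDGE=nvmf_br
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)   # prefix for target-side commands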
00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:12.102 Cannot find device "nvmf_init_br" 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:12.102 Cannot find device "nvmf_init_br2" 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:12.102 Cannot find device "nvmf_tgt_br" 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:12.102 Cannot find device "nvmf_tgt_br2" 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:12.102 Cannot find device "nvmf_init_br" 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:06:12.102 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:12.361 Cannot find device "nvmf_init_br2" 00:06:12.361 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:06:12.361 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:12.361 Cannot find device "nvmf_tgt_br" 00:06:12.361 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:06:12.361 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:12.361 Cannot find device "nvmf_tgt_br2" 00:06:12.361 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:06:12.361 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:12.361 Cannot find device "nvmf_br" 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:12.361 Cannot find device "nvmf_init_if" 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:12.361 Cannot find device "nvmf_init_if2" 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:12.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
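The "Cannot find device" and "Cannot open network namespace" errors here are expected on a fresh host: before building anything, the helper tears down whatever a previous run may have left behind, and each failing ip command is immediately followed by a bare "true" in the trace, which suggests the usual ignore-failure idiom. A minimal sketch of that pattern (hypothetical, not the harness source):

  # Best-effort cleanup: none of these devices may exist yet, so failures are
  # printed but ignored, keeping the script alive under set -e.
  ip link set nvmf_init_br nomaster || true
  ip link set nvmf_init_br down || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true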
00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:12.361 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:12.361 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:12.362 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:12.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:12.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:06:12.620 00:06:12.620 --- 10.0.0.3 ping statistics --- 00:06:12.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.620 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:12.620 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:12.620 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:06:12.620 00:06:12.620 --- 10.0.0.4 ping statistics --- 00:06:12.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.620 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:12.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:12.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:06:12.620 00:06:12.620 --- 10.0.0.1 ping statistics --- 00:06:12.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.620 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:12.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:12.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:06:12.620 00:06:12.620 --- 10.0.0.2 ping statistics --- 00:06:12.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.620 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=64686 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 64686 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 64686 ']' 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.620 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.620 [2024-12-05 12:09:43.341146] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
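Condensing the nvmf_veth_init trace above: each *_if veth end carries an address (the target ends inside the namespace), each *_br peer is enslaved to a common bridge, and iptables opens TCP port 4420 toward the listener. A single-pair sketch of the same topology, assuming root privileges; the harness builds two initiator and two target pairs:

  ip netns add nvmf_tgt_ns_spdk
  # one initiator pair and one target pair (the harness creates two of each)
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers so the initiator and target ends share one L2 segment
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port and allow bridge-internal forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # initiator -> target sanity check, as in the trace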
00:06:12.620 [2024-12-05 12:09:43.341232] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.878 [2024-12-05 12:09:43.494997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.878 [2024-12-05 12:09:43.556828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:12.878 [2024-12-05 12:09:43.557144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.878 [2024-12-05 12:09:43.557355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.878 [2024-12-05 12:09:43.557564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.878 [2024-12-05 12:09:43.557757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:12.878 [2024-12-05 12:09:43.559215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.878 [2024-12-05 12:09:43.559228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 [2024-12-05 12:09:44.368449] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 [2024-12-05 12:09:44.384641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 NULL1 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 Delay0 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=64737 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:13.812 12:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:13.812 [2024-12-05 12:09:44.589314] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
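Everything the rpc_cmd trace above configures can be replayed against a running target. rpc_cmd is a harness wrapper; the sketch below assumes it resolves to SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock socket, and reuses the exact methods and arguments from the trace. The delay bdev is the trick of this test: with roughly one-second simulated latencies, I/O submitted by perf is still queued when the subsystem is deleted underneath it.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
  # delay parameters are microseconds, so ~1 s average/p99 read and write latency
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # drive queued I/O from the initiator side, then give it time to build up
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2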
00:06:15.764 12:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:15.764 12:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.764 12:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:15.764 [repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" entries elided; they continue through 00:06:17.140, interleaved with the qpair state errors below, as in-flight I/O is failed back during the subsystem delete]
00:06:15.764 [2024-12-05 12:09:46.624143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205b7e0 is same with the state(6) to be set
00:06:15.764 [2024-12-05 12:09:46.625519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4de000d4d0 is same with the state(6) to be set
00:06:17.140 [2024-12-05 12:09:47.602935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204faa0 is same with the state(6) to be set
00:06:17.140 [2024-12-05 12:09:47.623780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4de000d020 is same with the state(6) to be set
00:06:17.140 [2024-12-05 12:09:47.624442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4de000d800 is same with the state(6) to be set
00:06:17.140 [2024-12-05 12:09:47.624975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aa50 is same with the state(6) to be set
00:06:17.140 [2024-12-05 12:09:47.626167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205dea0 is same with the state(6) to be set
00:06:17.140 Initializing NVMe Controllers
00:06:17.140 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:06:17.140 Controller IO queue size 128, less than required.
00:06:17.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:17.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:17.140 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:17.140 Initialization complete. Launching workers.
00:06:17.140 ========================================================
00:06:17.140                                                                                Latency(us)
00:06:17.140 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:17.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     173.40       0.08  887370.62     479.09 1011084.19
00:06:17.140 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     162.97       0.08  910573.67     342.38 1012629.68
00:06:17.140 ========================================================
00:06:17.140 Total                                                                  :     336.37       0.16  898612.28     342.38 1012629.68
00:06:17.140
00:06:17.140 [2024-12-05 12:09:47.626735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204faa0 (9): Bad file descriptor
00:06:17.140 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:17.140 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:17.140 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:17.140 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64737
00:06:17.140 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64737
00:06:17.400 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (64737) - No such process
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 64737
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 64737
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 64737
00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.400 [2024-12-05 12:09:48.153436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=64788 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64788 00:06:17.400 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.659 [2024-12-05 12:09:48.342002] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
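What follows is the harness waiting for the second perf run to exit on its own. The kill -0 probe only tests that the pid still exists; once it fails, wait collects the exit status. Roughly, as a condensed sketch of the loop traced at delete_subsystem.sh lines 56-60 (the first round used the same shape but asserted failure with NOT wait, since perf is expected to error out when the subsystem is deleted underneath it):

  delay=0
  while kill -0 "$perf_pid"; do      # probe: is perf still running?
      (( delay++ > 20 )) && exit 1   # give up after ~10 s of polling
      sleep 0.5
  done
  wait "$perf_pid"                   # collect the exit status
  # first round instead asserted failure: NOT wait "$perf_pid"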
00:06:17.918 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:17.918 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64788
00:06:17.918 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:18.553 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:18.553 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64788
00:06:18.553 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:19.120 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:19.120 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64788
00:06:19.120 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:19.379 12:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:19.379 12:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64788
00:06:19.379 12:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:19.946 12:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:19.946 12:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64788
00:06:19.946 12:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:20.513 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:20.513 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64788
00:06:20.513 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:20.771 Initializing NVMe Controllers
00:06:20.771 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:06:20.771 Controller IO queue size 128, less than required.
00:06:20.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:20.771 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:20.771 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:20.771 Initialization complete. Launching workers.
00:06:20.771 ========================================================
00:06:20.772                                                                                Latency(us)
00:06:20.772 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:20.772 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002941.76 1000129.42 1009253.29
00:06:20.772 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004818.65 1000156.74 1041603.27
00:06:20.772 ========================================================
00:06:20.772 Total                                                                  :     256.00       0.12 1003880.20 1000129.42 1041603.27
00:06:20.772
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64788
00:06:21.031 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (64788) - No such process
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 64788
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:21.031 rmmod nvme_tcp
00:06:21.031 rmmod nvme_fabrics
00:06:21.031 rmmod nvme_keyring
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 64686 ']'
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 64686
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 64686 ']'
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 64686
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64686
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:21.031 12:09:51
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64686' 00:06:21.031 killing process with pid 64686 00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 64686 00:06:21.031 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 64686 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:21.290 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:06:21.549 00:06:21.549 real 0m9.587s 00:06:21.549 user 0m28.886s 00:06:21.549 sys 0m1.654s 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.549 ************************************ 00:06:21.549 END TEST nvmf_delete_subsystem 00:06:21.549 ************************************ 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:21.549 ************************************ 00:06:21.549 START TEST nvmf_host_management 00:06:21.549 ************************************ 00:06:21.549 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:21.808 * Looking for test storage... 00:06:21.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:21.809 
12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.809 --rc genhtml_branch_coverage=1 00:06:21.809 --rc genhtml_function_coverage=1 00:06:21.809 --rc genhtml_legend=1 00:06:21.809 --rc geninfo_all_blocks=1 00:06:21.809 --rc geninfo_unexecuted_blocks=1 00:06:21.809 00:06:21.809 ' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.809 --rc genhtml_branch_coverage=1 00:06:21.809 --rc genhtml_function_coverage=1 00:06:21.809 --rc genhtml_legend=1 00:06:21.809 --rc geninfo_all_blocks=1 00:06:21.809 --rc geninfo_unexecuted_blocks=1 00:06:21.809 00:06:21.809 ' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.809 --rc genhtml_branch_coverage=1 00:06:21.809 --rc genhtml_function_coverage=1 00:06:21.809 --rc genhtml_legend=1 00:06:21.809 --rc geninfo_all_blocks=1 00:06:21.809 --rc geninfo_unexecuted_blocks=1 00:06:21.809 00:06:21.809 ' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.809 --rc genhtml_branch_coverage=1 00:06:21.809 --rc 
genhtml_function_coverage=1 00:06:21.809 --rc genhtml_legend=1 00:06:21.809 --rc geninfo_all_blocks=1 00:06:21.809 --rc geninfo_unexecuted_blocks=1 00:06:21.809 00:06:21.809 ' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:21.809 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:21.809 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:21.810 12:09:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:21.810 Cannot find device "nvmf_init_br" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:21.810 Cannot find device "nvmf_init_br2" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:21.810 Cannot find device "nvmf_tgt_br" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:21.810 Cannot find device "nvmf_tgt_br2" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:21.810 Cannot find device "nvmf_init_br" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:21.810 Cannot find device "nvmf_init_br2" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:21.810 Cannot find device "nvmf_tgt_br" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:21.810 Cannot find device "nvmf_tgt_br2" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:21.810 Cannot find device "nvmf_br" 00:06:21.810 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:21.810 12:09:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:22.072 Cannot find device "nvmf_init_if" 00:06:22.072 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:22.072 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:22.072 Cannot find device "nvmf_init_if2" 00:06:22.072 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:22.072 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:22.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:22.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:22.073 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:22.073 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:06:22.073 00:06:22.073 --- 10.0.0.3 ping statistics --- 00:06:22.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.073 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:22.073 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:06:22.073 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:06:22.073 00:06:22.073 --- 10.0.0.4 ping statistics --- 00:06:22.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.073 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:22.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:06:22.073 00:06:22.073 --- 10.0.0.1 ping statistics --- 00:06:22.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.073 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:22.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:22.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:06:22.073 00:06:22.073 --- 10.0.0.2 ping statistics --- 00:06:22.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.073 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65078 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65078 00:06:22.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
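[Annotation] A condensed sketch of the topology nvmf_veth_init built above and that the four pings just verified: initiator-side veth ends keep 10.0.0.1/10.0.0.2 in the root namespace, target-side ends take 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk, and every peer end is enslaved to the nvmf_br bridge. Shown here for one pair per side only; the trace builds two of each (nvmf_init_if2 and nvmf_tgt_if2 follow the same pattern).

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the ns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.3   # root ns -> bridge -> target ns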
00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65078 ']' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.073 12:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.333 [2024-12-05 12:09:53.002825] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:06:22.333 [2024-12-05 12:09:53.002916] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.333 [2024-12-05 12:09:53.149069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.333 [2024-12-05 12:09:53.198883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.333 [2024-12-05 12:09:53.199273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.333 [2024-12-05 12:09:53.199322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.333 [2024-12-05 12:09:53.199333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.333 [2024-12-05 12:09:53.199340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
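[Annotation] The reactor notices that follow come from the core mask passed above: -m 0x1E is 0b11110, so bits 1 through 4 are set and SPDK pins one reactor to each of cores 1-4. waitforlisten then blocks until the app's RPC socket answers; a rough shell equivalent of that poll, with the retry cadence illustrative rather than the harness's exact loop (pid 65078 is taken from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
    # bail out if the target died before its RPC server came up
    kill -0 65078 2>/dev/null || { echo 'nvmf_tgt (pid 65078) exited early' >&2; break; }
    sleep 0.5
done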
00:06:22.333 [2024-12-05 12:09:53.200536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.591 [2024-12-05 12:09:53.201153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.592 [2024-12-05 12:09:53.201336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.592 [2024-12-05 12:09:53.201346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.592 [2024-12-05 12:09:53.379844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.592 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.592 Malloc0 00:06:22.592 [2024-12-05 12:09:53.455945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65137 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65137 /var/tmp/bdevperf.sock 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65137 ']' 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:22.850 { 00:06:22.850 "params": { 00:06:22.850 "name": "Nvme$subsystem", 00:06:22.850 "trtype": "$TEST_TRANSPORT", 00:06:22.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:22.850 "adrfam": "ipv4", 00:06:22.850 "trsvcid": "$NVMF_PORT", 00:06:22.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:22.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:22.850 "hdgst": ${hdgst:-false}, 00:06:22.850 "ddgst": ${ddgst:-false} 00:06:22.850 }, 00:06:22.850 "method": "bdev_nvme_attach_controller" 00:06:22.850 } 00:06:22.850 EOF 00:06:22.850 )") 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:22.850 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:22.850 "params": { 00:06:22.850 "name": "Nvme0", 00:06:22.850 "trtype": "tcp", 00:06:22.850 "traddr": "10.0.0.3", 00:06:22.850 "adrfam": "ipv4", 00:06:22.850 "trsvcid": "4420", 00:06:22.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:22.850 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:22.850 "hdgst": false, 00:06:22.851 "ddgst": false 00:06:22.851 }, 00:06:22.851 "method": "bdev_nvme_attach_controller" 00:06:22.851 }' 00:06:22.851 [2024-12-05 12:09:53.565967] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
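[Annotation] The heredoc fragment assembled above is only the bdev_nvme_attach_controller entry; gen_nvmf_target_json wraps it in SPDK's standard subsystems/config JSON document before handing it to bdevperf on /dev/fd/63. A sketch of the equivalent written to a plain file instead of a process substitution (the file name is illustrative):

cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10

Attaching the controller as Nvme0 exposes its namespace 1 as the bdev Nvme0n1, which is the name the iostat polling below queries.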
00:06:22.851 [2024-12-05 12:09:53.566212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65137 ] 00:06:22.851 [2024-12-05 12:09:53.711113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.109 [2024-12-05 12:09:53.761632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.109 Running I/O for 10 seconds... 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:24.045 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
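[Annotation] waitforio above is the guard that proves traffic is actually flowing before the subsystem is ripped away: it polls bdev_get_iostat on the bdevperf RPC socket until Nvme0n1 reports at least 100 completed reads (the trace saw 1027 on the first poll). A rough equivalent of that loop, with the poll interval illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for (( i = 10; i > 0; i-- )); do
    reads=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [[ $reads -ge 100 ]] && break   # enough I/O observed; proceed with the test
    sleep 0.25
done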
00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.046 [2024-12-05 12:09:54.697027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1de0 is same with the state(6) to be set 00:06:24.046 [2024-12-05 12:09:54.697077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc1de0 is same with the state(6) to be set 00:06:24.046 [2024-12-05 12:09:54.701350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.046 [2024-12-05 12:09:54.701390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.701405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.046 [2024-12-05 12:09:54.701415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.701425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.046 [2024-12-05 12:09:54.701434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.701444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:24.046 [2024-12-05 12:09:54.701453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.701463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2216660 is same with the state(6) to be set 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.046 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:24.046 [2024-12-05 12:09:54.712404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2216660 (9): Bad file descriptor 00:06:24.046 [2024-12-05 12:09:54.712496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
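[Annotation] Everything from here to the end of the listing is the expected fallout of the RPC traced above: nvmf_subsystem_remove_host revokes host0's access to cnode0 while bdevperf is mid-verify, so the target tears down the live queue pairs and every in-flight WRITE completes as ABORTED - SQ DELETION (the flood below); the host_management.sh@85 nvmf_subsystem_add_host that follows restores access so the host can reconnect. The same pair as plain rpc.py calls against the target's default socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0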
00:06:24.046 [2024-12-05 12:09:54.712517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 [2024-12-05 12:09:54.712709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:24.046 [2024-12-05 12:09:54.712720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:24.046 
[2024-12-05 12:09:54.712729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:24.046 [2024-12-05 12:09:54.712740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:24.046 [... identical WRITE / "ABORTED - SQ DELETION (00/08)" pairs elided for cid 12 through 62, lba advancing in steps of 128 from 17920 to 24320 ...]
00:06:24.047 [2024-12-05 12:09:54.713818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:24.047 [2024-12-05 12:09:54.713826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:24.047 [2024-12-05 12:09:54.715020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:24.047 task offset: 16384 on job bdev=Nvme0n1 fails
00:06:24.047
00:06:24.047 Latency(us)
00:06:24.047 [2024-12-05T12:09:54.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:24.047 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:24.047 Job: Nvme0n1 ended in about 0.76 seconds with error
00:06:24.047 Verification LBA range: start 0x0 length 0x400
00:06:24.047 Nvme0n1 : 0.76 1509.67 94.35 83.87 0.00 39225.86 1846.92 36700.16
00:06:24.047 [2024-12-05T12:09:54.916Z] ===================================================================================================================
00:06:24.047 [2024-12-05T12:09:54.916Z] Total : 1509.67 94.35 83.87 0.00 39225.86 1846.92 36700.16
00:06:24.047 [2024-12-05 12:09:54.717010] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:24.047 [2024-12-05 12:09:54.728454] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65137
00:06:24.981 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65137) - No such process
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:06:24.981 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:06:24.981 {
00:06:24.981   "params": {
00:06:24.981     "name": "Nvme$subsystem",
00:06:24.981     "trtype": "$TEST_TRANSPORT",
00:06:24.981     "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:24.981     "adrfam": "ipv4",
00:06:24.981     "trsvcid": "$NVMF_PORT",
00:06:24.981     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:24.982     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:24.982     "hdgst": ${hdgst:-false},
00:06:24.982     "ddgst": ${ddgst:-false}
00:06:24.982   },
00:06:24.982   "method": "bdev_nvme_attach_controller"
00:06:24.982 }
00:06:24.982 EOF
00:06:24.982 )")
00:06:24.982 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:06:24.982 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:06:24.982 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:06:24.982 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:06:24.982   "params": {
00:06:24.982     "name": "Nvme0",
00:06:24.982     "trtype": "tcp",
00:06:24.982     "traddr": "10.0.0.3",
00:06:24.982     "adrfam": "ipv4",
00:06:24.982     "trsvcid": "4420",
00:06:24.982     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:24.982     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:24.982     "hdgst": false,
00:06:24.982     "ddgst": false
00:06:24.982   },
00:06:24.982   "method": "bdev_nvme_attach_controller"
00:06:24.982 }'
00:06:24.982 [2024-12-05 12:09:55.766022] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:06:24.982 [2024-12-05 12:09:55.766111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65187 ]
00:06:25.239 [2024-12-05 12:09:55.909636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.239 [2024-12-05 12:09:55.952823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.496 Running I/O for 1 seconds...
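The heredoc above prints only the bdev_nvme_attach_controller fragment that gen_nvmf_target_json assembles; the complete file handed to bdevperf over /dev/fd/62 is a standard SPDK JSON config. A minimal standalone sketch of that config and invocation follows. The outer {"subsystems": [...]} wrapper is an assumption here (it is not printed in this log), /tmp/bdevperf.json is a hypothetical path standing in for the process-substitution fd, and the parameter values are the ones printed verbatim above:

    # Sketch: reproduce the bdevperf run by hand (wrapper structure assumed).
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 64 in-flight IOs of 64 KiB each, verify workload, 1 second run:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1

With -o 65536, bdevperf's MiB/s column is simply IOPS x 64 KiB; the passing run below reports 1688.93 IOPS = 105.56 MiB/s.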
00:06:26.431 1664.00 IOPS, 104.00 MiB/s
00:06:26.431 Latency(us)
00:06:26.431 [2024-12-05T12:09:57.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:26.431 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:26.431 Verification LBA range: start 0x0 length 0x400
00:06:26.431 Nvme0n1 : 1.02 1688.93 105.56 0.00 0.00 37194.76 5004.57 32648.84
00:06:26.431 [2024-12-05T12:09:57.300Z] ===================================================================================================================
00:06:26.431 [2024-12-05T12:09:57.300Z] Total : 1688.93 105.56 0.00 0.00 37194.76 5004.57 32648.84
00:06:26.691 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
12:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
12:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
12:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
12:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
[... nvmfcleanup xtrace elided: nvmf/common.sh syncs and runs modprobe -v -r nvme-tcp / nvme-fabrics ...]
00:06:26.691 rmmod nvme_tcp
00:06:26.691 rmmod nvme_fabrics
00:06:26.691 rmmod nvme_keyring
[... killprocess xtrace elided: nvmf/common.sh kills nvmfpid 65078 (process_name=reactor_1) ...]
00:06:26.691 killing process with pid 65078
00:06:26.951 [2024-12-05 12:09:57.685934] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
[... nvmf_tcp_fini/nvmf_veth_fini xtrace elided: SPDK_NVMF iptables rules stripped via iptables-save | grep -v SPDK_NVMF | iptables-restore; nvmf_init_br, nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2 unmastered and downed; nvmf_br, nvmf_init_if, nvmf_init_if2 and the namespaced nvmf_tgt_if, nvmf_tgt_if2 deleted; nvmf_tgt_ns_spdk removed via _remove_spdk_ns ...]
12:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:06:27.213
00:06:27.213 real 0m5.600s
00:06:27.213 user 0m20.839s
00:06:27.213 sys 0m1.477s
00:06:27.213 ************************************
00:06:27.213 END TEST nvmf_host_management
00:06:27.213 ************************************
00:06:27.213 12:09:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
[... run_test wrapper xtrace elided ...]
00:06:27.213 ************************************
00:06:27.213 START TEST nvmf_lvol
00:06:27.213 ************************************
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:06:27.477 * Looking for test storage...
00:06:27.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
[... lcov version-compare xtrace elided: common/autotest_common.sh and scripts/common.sh confirm lcov 1.15 < 2 and export LCOV_OPTS/LCOV with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo rc options ...]
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3
[... nvmf/common.sh defaults elided: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NET_TYPE=virt, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ...]
[... paths/export.sh xtrace elided: the /opt/golangci, /opt/protoc and /opt/go toolchain directories are prepended to PATH three times and the resulting multi-hundred-character PATH echoed; the repeated PATH strings are omitted here ...]
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
[... nvmftestinit xtrace elided: prepare_net_devs takes the virt (veth) path, removes any stale nvmf_tgt_ns_spdk namespace, and nvmf_veth_init assigns NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.2, NVMF_FIRST_TARGET_IP=10.0.0.3, NVMF_SECOND_TARGET_IP=10.0.0.4 along with the nvmf_init_*, nvmf_tgt_* and nvmf_br interface names ...]
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:06:27.479 Cannot find device "nvmf_init_br"
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true
[... identical cleanup probes elided: nomaster/down/delete attempts for nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2, nvmf_br, nvmf_init_if, nvmf_init_if2 and the namespaced nvmf_tgt_if, nvmf_tgt_if2 all report Cannot find device / Cannot open network namespace "nvmf_tgt_ns_spdk", as expected on a clean host ...]
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
[... link bring-up elided: nvmf/common.sh@196-204 set nvmf_init_if, nvmf_init_if2, nvmf_init_br, nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2, the namespaced nvmf_tgt_if, nvmf_tgt_if2 and lo up ...]
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up
[... bridge enslaving elided: nvmf/common.sh@211-214 set nvmf_init_br, nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2 master nvmf_br ...]
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
[... underlying iptables calls (nvmf/common.sh@790) elided: each rule is added with an 'SPDK_NVMF:...' comment so nvmf_tcp_fini can strip it later ...]
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:06:27.739 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:06:27.739 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:06:27.739
00:06:27.739 --- 10.0.0.3 ping statistics ---
00:06:27.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:27.739 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:06:27.739 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:06:27.739 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms
00:06:27.739
00:06:27.739 --- 10.0.0.4 ping statistics ---
00:06:27.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:27.739 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:06:27.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:27.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:06:27.739
00:06:27.739 --- 10.0.0.1 ping statistics ---
00:06:27.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:27.739 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:06:27.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:27.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms
00:06:27.739
00:06:27.739 --- 10.0.0.2 ping statistics ---
00:06:27.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:27.739 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
[... transport option checks elided: NVMF_TRANSPORT_OPTS is set to '-t tcp -o' ...]
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65455
12:09:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65455
00:06:27.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:27.999 [2024-12-05 12:09:58.672713] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
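Taken together, the nvmf_veth_init steps above build a small veth-plus-bridge topology: the target's interfaces live inside the nvmf_tgt_ns_spdk namespace while the initiator stays in the root namespace, and a bridge joins the peer ends. The following is a condensed sketch distilled from the logged commands (run as root; the second if2/br2 pair is omitted for brevity):

    # Target side in its own network namespace; initiator in the root ns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target,    10.0.0.3
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # A bridge in the root ns joins the "br" peer ends so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    # Open the NVMe/TCP port; the harness tags each rule with an SPDK_NVMF comment
    # so teardown can strip exactly these rules later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # same sanity check as in the log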
00:06:27.999 [2024-12-05 12:09:58.672802] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:27.999 [2024-12-05 12:09:58.823948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:28.258 [2024-12-05 12:09:58.876325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:28.258 [2024-12-05 12:09:58.876396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:28.259 [2024-12-05 12:09:58.876411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:28.259 [2024-12-05 12:09:58.876422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:28.259 [2024-12-05 12:09:58.876431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:28.259 [2024-12-05 12:09:58.877691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:28.259 [2024-12-05 12:09:58.877837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:28.259 [2024-12-05 12:09:58.877844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
12:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
12:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
[... timing/xtrace bookkeeping elided ...]
12:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
12:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:29.084 [2024-12-05 12:09:59.877868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
12:09:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
12:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
12:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
12:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
12:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
12:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
12:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=41ca4cf5-6e41-4bd0-a1d0-73b577684243
41ca4cf5-6e41-4bd0-a1d0-73b577684243 lvol 20 00:06:30.687 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7c501d3a-28cd-40ee-9d04-57fc206ac93d 00:06:30.687 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:30.945 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c501d3a-28cd-40ee-9d04-57fc206ac93d 00:06:31.203 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:31.460 [2024-12-05 12:10:02.101553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:31.460 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:31.718 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65607 00:06:31.718 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:31.718 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:32.651 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7c501d3a-28cd-40ee-9d04-57fc206ac93d MY_SNAPSHOT 00:06:32.909 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9fd757d4-dbb8-4193-8b0b-640d22032b6a 00:06:32.909 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7c501d3a-28cd-40ee-9d04-57fc206ac93d 30 00:06:33.167 12:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9fd757d4-dbb8-4193-8b0b-640d22032b6a MY_CLONE 00:06:33.425 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=54946edb-4e00-4548-8272-2f3a5a87acfe 00:06:33.425 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 54946edb-4e00-4548-8272-2f3a5a87acfe 00:06:34.359 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65607 00:06:42.477 Initializing NVMe Controllers 00:06:42.477 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:42.477 Controller IO queue size 128, less than required. 00:06:42.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:42.477 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:42.477 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:42.477 Initialization complete. Launching workers. 
00:06:42.477 ======================================================== 00:06:42.477 Latency(us) 00:06:42.477 Device Information : IOPS MiB/s Average min max 00:06:42.477 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11582.00 45.24 11054.74 1434.27 78656.27 00:06:42.477 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11558.00 45.15 11078.69 2859.61 63476.85 00:06:42.477 ======================================================== 00:06:42.477 Total : 23140.00 90.39 11066.70 1434.27 78656.27 00:06:42.478 00:06:42.478 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:42.478 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7c501d3a-28cd-40ee-9d04-57fc206ac93d 00:06:42.478 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 41ca4cf5-6e41-4bd0-a1d0-73b577684243 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:42.735 rmmod nvme_tcp 00:06:42.735 rmmod nvme_fabrics 00:06:42.735 rmmod nvme_keyring 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65455 ']' 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65455 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65455 ']' 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65455 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65455 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.735 killing process with pid 65455 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65455' 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65455 00:06:42.735 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65455 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:42.993 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:43.251 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:06:43.251 00:06:43.251 real 0m16.036s 00:06:43.251 user 1m6.210s 00:06:43.251 sys 0m3.796s 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.251 ************************************ 00:06:43.251 END TEST nvmf_lvol 00:06:43.251 ************************************ 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.251 ************************************ 00:06:43.251 START TEST nvmf_lvs_grow 00:06:43.251 ************************************ 00:06:43.251 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:43.510 * Looking for test storage... 00:06:43.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.510 --rc genhtml_branch_coverage=1 00:06:43.510 --rc genhtml_function_coverage=1 00:06:43.510 --rc genhtml_legend=1 00:06:43.510 --rc geninfo_all_blocks=1 00:06:43.510 --rc geninfo_unexecuted_blocks=1 00:06:43.510 00:06:43.510 ' 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.510 --rc genhtml_branch_coverage=1 00:06:43.510 --rc genhtml_function_coverage=1 00:06:43.510 --rc genhtml_legend=1 00:06:43.510 --rc geninfo_all_blocks=1 00:06:43.510 --rc geninfo_unexecuted_blocks=1 00:06:43.510 00:06:43.510 ' 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.510 --rc genhtml_branch_coverage=1 00:06:43.510 --rc genhtml_function_coverage=1 00:06:43.510 --rc genhtml_legend=1 00:06:43.510 --rc geninfo_all_blocks=1 00:06:43.510 --rc geninfo_unexecuted_blocks=1 00:06:43.510 00:06:43.510 ' 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.510 --rc genhtml_branch_coverage=1 00:06:43.510 --rc genhtml_function_coverage=1 00:06:43.510 --rc genhtml_legend=1 00:06:43.510 --rc geninfo_all_blocks=1 00:06:43.510 --rc geninfo_unexecuted_blocks=1 00:06:43.510 00:06:43.510 ' 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:43.510 12:10:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:06:43.510 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.511 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
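The `[: : integer expression expected` message above comes from the traced test at nvmf/common.sh line 33, which runs `'[' '' -eq 1 ']'`: the `-eq` operator inside `[ ]` requires integer operands, so a variable that expands to the empty string makes the test command itself error out rather than simply evaluate false (the harness tolerates this and falls through to the next branch). A minimal sketch of the usual guard, with SOME_FLAG as a hypothetical stand-in for the elided variable, not the SPDK source itself:

# hypothetical guard: default the value so '' never reaches -eq
some_flag=${SOME_FLAG:-0}
if [ "$some_flag" -eq 1 ]; then
    echo "feature enabled"
fi
# an arithmetic context also treats an empty or unset variable as 0 without erroring
if (( some_flag == 1 )); then
    echo "feature enabled"
fi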
00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:43.511 Cannot find device "nvmf_init_br" 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:43.511 Cannot find device "nvmf_init_br2" 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:43.511 Cannot find device "nvmf_tgt_br" 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:43.511 Cannot find device "nvmf_tgt_br2" 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:43.511 Cannot find device "nvmf_init_br" 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:43.511 Cannot find device "nvmf_init_br2" 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:43.511 Cannot find device "nvmf_tgt_br" 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:43.511 Cannot find device "nvmf_tgt_br2" 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:06:43.511 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:43.770 Cannot find device "nvmf_br" 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:43.770 Cannot find device "nvmf_init_if" 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:43.770 Cannot find device "nvmf_init_if2" 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:43.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:43.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
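The block above builds the test network that the iptables rules and ping checks just below then verify: initiator veth ends on the host (10.0.0.1 and 10.0.0.2) and target veth ends inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge. A condensed sketch of one initiator/target pair, using the names and addresses from the trace (the second pair follows the same pattern; error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
# after this, the namespaced target can listen on 10.0.0.3:4420 and the host can reach it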
00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:43.770 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:43.770 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:06:43.770 00:06:43.770 --- 10.0.0.3 ping statistics --- 00:06:43.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.770 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:43.770 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:43.770 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:43.770 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:06:43.770 00:06:43.771 --- 10.0.0.4 ping statistics --- 00:06:43.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.771 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:43.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:43.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:06:43.771 00:06:43.771 --- 10.0.0.1 ping statistics --- 00:06:43.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.771 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:43.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:43.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:06:43.771 00:06:43.771 --- 10.0.0.2 ping statistics --- 00:06:43.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.771 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.771 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66023 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66023 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66023 ']' 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.029 12:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.029 [2024-12-05 12:10:14.725881] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:06:44.029 [2024-12-05 12:10:14.725971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.029 [2024-12-05 12:10:14.876448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.287 [2024-12-05 12:10:14.931610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.287 [2024-12-05 12:10:14.931847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.287 [2024-12-05 12:10:14.931944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.287 [2024-12-05 12:10:14.932027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.287 [2024-12-05 12:10:14.932105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.287 [2024-12-05 12:10:14.932670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.287 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.287 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:44.287 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:44.287 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.287 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.287 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.287 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:44.545 [2024-12-05 12:10:15.376237] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.545 ************************************ 00:06:44.545 START TEST lvs_grow_clean 00:06:44.545 ************************************ 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:44.545 12:10:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:44.545 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:44.802 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:44.802 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:45.059 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:45.059 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:45.317 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:06:45.317 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:06:45.317 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:45.574 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:45.574 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:45.574 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b lvol 150 00:06:45.832 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9dc7e1d9-a186-4749-95a7-5207deea610b 00:06:45.832 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:06:45.832 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:46.090 [2024-12-05 12:10:16.840580] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:46.090 [2024-12-05 12:10:16.840712] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:46.090 true 00:06:46.090 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:06:46.090 12:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:46.348 12:10:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:46.348 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:46.606 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9dc7e1d9-a186-4749-95a7-5207deea610b 00:06:46.864 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:47.122 [2024-12-05 12:10:17.905189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:47.122 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66176 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66176 /var/tmp/bdevperf.sock 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66176 ']' 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:47.379 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.380 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:47.380 [2024-12-05 12:10:18.208370] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:06:47.380 [2024-12-05 12:10:18.208472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66176 ] 00:06:47.638 [2024-12-05 12:10:18.355669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.638 [2024-12-05 12:10:18.423071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.580 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.580 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:48.580 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:48.838 Nvme0n1 00:06:48.838 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:49.097 [ 00:06:49.097 { 00:06:49.097 "aliases": [ 00:06:49.097 "9dc7e1d9-a186-4749-95a7-5207deea610b" 00:06:49.097 ], 00:06:49.097 "assigned_rate_limits": { 00:06:49.097 "r_mbytes_per_sec": 0, 00:06:49.097 "rw_ios_per_sec": 0, 00:06:49.097 "rw_mbytes_per_sec": 0, 00:06:49.097 "w_mbytes_per_sec": 0 00:06:49.097 }, 00:06:49.097 "block_size": 4096, 00:06:49.097 "claimed": false, 00:06:49.097 "driver_specific": { 00:06:49.097 "mp_policy": "active_passive", 00:06:49.097 "nvme": [ 00:06:49.097 { 00:06:49.097 "ctrlr_data": { 00:06:49.097 "ana_reporting": false, 00:06:49.097 "cntlid": 1, 00:06:49.097 "firmware_revision": "25.01", 00:06:49.097 "model_number": "SPDK bdev Controller", 00:06:49.097 "multi_ctrlr": true, 00:06:49.097 "oacs": { 00:06:49.097 "firmware": 0, 00:06:49.097 "format": 0, 00:06:49.097 "ns_manage": 0, 00:06:49.097 "security": 0 00:06:49.097 }, 00:06:49.097 "serial_number": "SPDK0", 00:06:49.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:49.097 "vendor_id": "0x8086" 00:06:49.097 }, 00:06:49.097 "ns_data": { 00:06:49.097 "can_share": true, 00:06:49.097 "id": 1 00:06:49.097 }, 00:06:49.097 "trid": { 00:06:49.097 "adrfam": "IPv4", 00:06:49.097 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:49.097 "traddr": "10.0.0.3", 00:06:49.097 "trsvcid": "4420", 00:06:49.097 "trtype": "TCP" 00:06:49.097 }, 00:06:49.097 "vs": { 00:06:49.097 "nvme_version": "1.3" 00:06:49.097 } 00:06:49.097 } 00:06:49.097 ] 00:06:49.097 }, 00:06:49.097 "memory_domains": [ 00:06:49.097 { 00:06:49.097 "dma_device_id": "system", 00:06:49.097 "dma_device_type": 1 00:06:49.097 } 00:06:49.097 ], 00:06:49.097 "name": "Nvme0n1", 00:06:49.097 "num_blocks": 38912, 00:06:49.097 "numa_id": -1, 00:06:49.097 "product_name": "NVMe disk", 00:06:49.097 "supported_io_types": { 00:06:49.097 "abort": true, 00:06:49.097 "compare": true, 00:06:49.097 "compare_and_write": true, 00:06:49.097 "copy": true, 00:06:49.097 "flush": true, 00:06:49.097 "get_zone_info": false, 00:06:49.097 "nvme_admin": true, 00:06:49.097 "nvme_io": true, 00:06:49.097 "nvme_io_md": false, 00:06:49.097 "nvme_iov_md": false, 00:06:49.097 "read": true, 00:06:49.097 "reset": true, 00:06:49.097 "seek_data": false, 00:06:49.097 "seek_hole": false, 00:06:49.097 "unmap": true, 00:06:49.097 
"write": true, 00:06:49.097 "write_zeroes": true, 00:06:49.097 "zcopy": false, 00:06:49.097 "zone_append": false, 00:06:49.097 "zone_management": false 00:06:49.097 }, 00:06:49.097 "uuid": "9dc7e1d9-a186-4749-95a7-5207deea610b", 00:06:49.097 "zoned": false 00:06:49.097 } 00:06:49.097 ] 00:06:49.097 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66224 00:06:49.097 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:49.097 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:49.097 Running I/O for 10 seconds... 00:06:50.471 Latency(us) 00:06:50.471 [2024-12-05T12:10:21.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.471 Nvme0n1 : 1.00 7449.00 29.10 0.00 0.00 0.00 0.00 0.00 00:06:50.471 [2024-12-05T12:10:21.340Z] =================================================================================================================== 00:06:50.471 [2024-12-05T12:10:21.340Z] Total : 7449.00 29.10 0.00 0.00 0.00 0.00 0.00 00:06:50.471 00:06:51.038 12:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:06:51.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.297 Nvme0n1 : 2.00 7410.50 28.95 0.00 0.00 0.00 0.00 0.00 00:06:51.297 [2024-12-05T12:10:22.166Z] =================================================================================================================== 00:06:51.297 [2024-12-05T12:10:22.166Z] Total : 7410.50 28.95 0.00 0.00 0.00 0.00 0.00 00:06:51.297 00:06:51.297 true 00:06:51.297 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:06:51.297 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:51.863 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:51.863 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:51.863 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66224 00:06:52.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.122 Nvme0n1 : 3.00 7356.67 28.74 0.00 0.00 0.00 0.00 0.00 00:06:52.122 [2024-12-05T12:10:22.991Z] =================================================================================================================== 00:06:52.122 [2024-12-05T12:10:22.991Z] Total : 7356.67 28.74 0.00 0.00 0.00 0.00 0.00 00:06:52.122 00:06:53.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.497 Nvme0n1 : 4.00 7295.75 28.50 0.00 0.00 0.00 0.00 0.00 00:06:53.497 [2024-12-05T12:10:24.366Z] =================================================================================================================== 00:06:53.497 [2024-12-05T12:10:24.366Z] Total : 7295.75 28.50 0.00 0.00 0.00 
0.00 0.00 00:06:53.497 00:06:54.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.432 Nvme0n1 : 5.00 7138.60 27.89 0.00 0.00 0.00 0.00 0.00 00:06:54.432 [2024-12-05T12:10:25.301Z] =================================================================================================================== 00:06:54.432 [2024-12-05T12:10:25.301Z] Total : 7138.60 27.89 0.00 0.00 0.00 0.00 0.00 00:06:54.432 00:06:55.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.368 Nvme0n1 : 6.00 7134.17 27.87 0.00 0.00 0.00 0.00 0.00 00:06:55.368 [2024-12-05T12:10:26.237Z] =================================================================================================================== 00:06:55.368 [2024-12-05T12:10:26.237Z] Total : 7134.17 27.87 0.00 0.00 0.00 0.00 0.00 00:06:55.368 00:06:56.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.305 Nvme0n1 : 7.00 7102.00 27.74 0.00 0.00 0.00 0.00 0.00 00:06:56.305 [2024-12-05T12:10:27.174Z] =================================================================================================================== 00:06:56.305 [2024-12-05T12:10:27.174Z] Total : 7102.00 27.74 0.00 0.00 0.00 0.00 0.00 00:06:56.305 00:06:57.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.240 Nvme0n1 : 8.00 7079.62 27.65 0.00 0.00 0.00 0.00 0.00 00:06:57.240 [2024-12-05T12:10:28.109Z] =================================================================================================================== 00:06:57.240 [2024-12-05T12:10:28.109Z] Total : 7079.62 27.65 0.00 0.00 0.00 0.00 0.00 00:06:57.240 00:06:58.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.174 Nvme0n1 : 9.00 7076.22 27.64 0.00 0.00 0.00 0.00 0.00 00:06:58.174 [2024-12-05T12:10:29.043Z] =================================================================================================================== 00:06:58.174 [2024-12-05T12:10:29.043Z] Total : 7076.22 27.64 0.00 0.00 0.00 0.00 0.00 00:06:58.174 00:06:59.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.109 Nvme0n1 : 10.00 7061.70 27.58 0.00 0.00 0.00 0.00 0.00 00:06:59.109 [2024-12-05T12:10:29.978Z] =================================================================================================================== 00:06:59.109 [2024-12-05T12:10:29.978Z] Total : 7061.70 27.58 0.00 0.00 0.00 0.00 0.00 00:06:59.109 00:06:59.109 00:06:59.109 Latency(us) 00:06:59.109 [2024-12-05T12:10:29.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.109 Nvme0n1 : 10.01 7067.36 27.61 0.00 0.00 18099.44 8102.63 113913.48 00:06:59.109 [2024-12-05T12:10:29.978Z] =================================================================================================================== 00:06:59.109 [2024-12-05T12:10:29.978Z] Total : 7067.36 27.61 0.00 0.00 18099.44 8102.63 113913.48 00:06:59.109 { 00:06:59.109 "results": [ 00:06:59.109 { 00:06:59.109 "job": "Nvme0n1", 00:06:59.109 "core_mask": "0x2", 00:06:59.109 "workload": "randwrite", 00:06:59.109 "status": "finished", 00:06:59.109 "queue_depth": 128, 00:06:59.109 "io_size": 4096, 00:06:59.109 "runtime": 10.010109, 00:06:59.109 "iops": 7067.355610213635, 00:06:59.109 "mibps": 27.606857852397013, 00:06:59.109 "io_failed": 0, 00:06:59.109 "io_timeout": 0, 00:06:59.109 "avg_latency_us": 
18099.436875538908, 00:06:59.109 "min_latency_us": 8102.632727272728, 00:06:59.109 "max_latency_us": 113913.48363636364 00:06:59.109 } 00:06:59.109 ], 00:06:59.109 "core_count": 1 00:06:59.109 } 00:06:59.109 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66176 00:06:59.109 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66176 ']' 00:06:59.109 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66176 00:06:59.109 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:59.109 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.367 12:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66176 00:06:59.367 killing process with pid 66176 00:06:59.367 Received shutdown signal, test time was about 10.000000 seconds 00:06:59.367 00:06:59.367 Latency(us) 00:06:59.367 [2024-12-05T12:10:30.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.368 [2024-12-05T12:10:30.237Z] =================================================================================================================== 00:06:59.368 [2024-12-05T12:10:30.237Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:59.368 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:59.368 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:59.368 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66176' 00:06:59.368 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66176 00:06:59.368 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66176 00:06:59.368 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:59.933 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:59.933 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:59.933 12:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:07:00.191 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:00.191 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:00.191 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:00.448 [2024-12-05 12:10:31.233923] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:00.448 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:07:00.705 2024/12/05 12:10:31 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:27c1d9a4-c8f6-496a-b110-cf9f705e1a4b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:00.705 request: 00:07:00.705 { 00:07:00.705 "method": "bdev_lvol_get_lvstores", 00:07:00.705 "params": { 00:07:00.705 "uuid": "27c1d9a4-c8f6-496a-b110-cf9f705e1a4b" 00:07:00.705 } 00:07:00.705 } 00:07:00.705 Got JSON-RPC error response 00:07:00.705 GoRPCClient: error on JSON-RPC call 00:07:00.705 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:00.705 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.705 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.705 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.705 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:00.971 aio_bdev 00:07:00.971 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9dc7e1d9-a186-4749-95a7-5207deea610b 00:07:00.971 12:10:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=9dc7e1d9-a186-4749-95a7-5207deea610b 00:07:00.971 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:00.971 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:00.971 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:00.971 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:00.971 12:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:01.245 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9dc7e1d9-a186-4749-95a7-5207deea610b -t 2000 00:07:01.504 [ 00:07:01.504 { 00:07:01.504 "aliases": [ 00:07:01.504 "lvs/lvol" 00:07:01.504 ], 00:07:01.504 "assigned_rate_limits": { 00:07:01.504 "r_mbytes_per_sec": 0, 00:07:01.504 "rw_ios_per_sec": 0, 00:07:01.504 "rw_mbytes_per_sec": 0, 00:07:01.504 "w_mbytes_per_sec": 0 00:07:01.504 }, 00:07:01.504 "block_size": 4096, 00:07:01.504 "claimed": false, 00:07:01.504 "driver_specific": { 00:07:01.504 "lvol": { 00:07:01.504 "base_bdev": "aio_bdev", 00:07:01.504 "clone": false, 00:07:01.504 "esnap_clone": false, 00:07:01.504 "lvol_store_uuid": "27c1d9a4-c8f6-496a-b110-cf9f705e1a4b", 00:07:01.504 "num_allocated_clusters": 38, 00:07:01.504 "snapshot": false, 00:07:01.504 "thin_provision": false 00:07:01.504 } 00:07:01.504 }, 00:07:01.504 "name": "9dc7e1d9-a186-4749-95a7-5207deea610b", 00:07:01.504 "num_blocks": 38912, 00:07:01.504 "product_name": "Logical Volume", 00:07:01.504 "supported_io_types": { 00:07:01.504 "abort": false, 00:07:01.504 "compare": false, 00:07:01.504 "compare_and_write": false, 00:07:01.504 "copy": false, 00:07:01.504 "flush": false, 00:07:01.504 "get_zone_info": false, 00:07:01.504 "nvme_admin": false, 00:07:01.504 "nvme_io": false, 00:07:01.504 "nvme_io_md": false, 00:07:01.504 "nvme_iov_md": false, 00:07:01.504 "read": true, 00:07:01.504 "reset": true, 00:07:01.504 "seek_data": true, 00:07:01.504 "seek_hole": true, 00:07:01.504 "unmap": true, 00:07:01.504 "write": true, 00:07:01.504 "write_zeroes": true, 00:07:01.504 "zcopy": false, 00:07:01.504 "zone_append": false, 00:07:01.504 "zone_management": false 00:07:01.504 }, 00:07:01.504 "uuid": "9dc7e1d9-a186-4749-95a7-5207deea610b", 00:07:01.504 "zoned": false 00:07:01.504 } 00:07:01.504 ] 00:07:01.504 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:01.504 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:07:01.504 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:01.763 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:01.763 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:07:01.763 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:02.022 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:02.022 12:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9dc7e1d9-a186-4749-95a7-5207deea610b 00:07:02.280 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27c1d9a4-c8f6-496a-b110-cf9f705e1a4b 00:07:02.539 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:02.798 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:03.366 00:07:03.366 real 0m18.542s 00:07:03.366 user 0m18.025s 00:07:03.366 sys 0m2.181s 00:07:03.366 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.366 12:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:03.366 ************************************ 00:07:03.366 END TEST lvs_grow_clean 00:07:03.366 ************************************ 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.366 ************************************ 00:07:03.366 START TEST lvs_grow_dirty 00:07:03.366 ************************************ 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:03.366 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:03.366 
12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:03.626 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:03.627 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:03.886 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:03.886 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:03.886 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:04.145 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:04.145 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:04.145 12:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0fdd31e3-2714-498d-a514-7e2d54177c71 lvol 150 00:07:04.404 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0b114737-c51f-4fd8-a614-8dab8716460a 00:07:04.404 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:04.404 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:04.664 [2024-12-05 12:10:35.351164] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:04.664 [2024-12-05 12:10:35.351259] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:04.664 true 00:07:04.664 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:04.664 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:04.922 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:04.922 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.180 12:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0b114737-c51f-4fd8-a614-8dab8716460a 00:07:05.438 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:05.696 [2024-12-05 12:10:36.371923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:05.696 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:05.955 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66621 00:07:05.955 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:05.955 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.955 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66621 /var/tmp/bdevperf.sock 00:07:05.955 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66621 ']' 00:07:05.955 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:05.955 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:05.956 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:05.956 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.956 12:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:05.956 [2024-12-05 12:10:36.718983] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
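The setup traced just above follows the lvs_grow pattern: enlarge the file backing an AIO bdev, rescan it, and later grow the logical-volume store mid-I/O. A condensed sketch of that flow, reconstructed only from the rpc.py commands visible in this log (paths, sizes, and cluster counts mirror the log; the shell variables are illustrative):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    truncate -s 200M "$AIO"                        # 200 MiB backing file
    $RPC bdev_aio_create "$AIO" aio_bdev 4096      # expose it as an AIO bdev
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB volume

    truncate -s 400M "$AIO"                        # grow the file under the bdev
    $RPC bdev_aio_rescan aio_bdev                  # block count 51200 -> 102400
    $RPC bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'         # still 49 until the grow
    $RPC bdev_lvol_grow_lvstore -u "$lvs"          # issued mid-I/O; clusters become 99

The NVMe-oF side (nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener on 10.0.0.3:4420) then exposes the volume that bdevperf attaches to in the lines that follow.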
00:07:05.956 [2024-12-05 12:10:36.719073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66621 ] 00:07:06.215 [2024-12-05 12:10:36.867074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.215 [2024-12-05 12:10:36.932110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.154 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.154 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:07.154 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:07.154 Nvme0n1 00:07:07.154 12:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:07.413 [ 00:07:07.413 { 00:07:07.413 "aliases": [ 00:07:07.413 "0b114737-c51f-4fd8-a614-8dab8716460a" 00:07:07.413 ], 00:07:07.413 "assigned_rate_limits": { 00:07:07.413 "r_mbytes_per_sec": 0, 00:07:07.413 "rw_ios_per_sec": 0, 00:07:07.413 "rw_mbytes_per_sec": 0, 00:07:07.413 "w_mbytes_per_sec": 0 00:07:07.413 }, 00:07:07.413 "block_size": 4096, 00:07:07.413 "claimed": false, 00:07:07.413 "driver_specific": { 00:07:07.413 "mp_policy": "active_passive", 00:07:07.413 "nvme": [ 00:07:07.413 { 00:07:07.413 "ctrlr_data": { 00:07:07.413 "ana_reporting": false, 00:07:07.413 "cntlid": 1, 00:07:07.413 "firmware_revision": "25.01", 00:07:07.413 "model_number": "SPDK bdev Controller", 00:07:07.413 "multi_ctrlr": true, 00:07:07.413 "oacs": { 00:07:07.413 "firmware": 0, 00:07:07.413 "format": 0, 00:07:07.413 "ns_manage": 0, 00:07:07.413 "security": 0 00:07:07.413 }, 00:07:07.413 "serial_number": "SPDK0", 00:07:07.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:07.413 "vendor_id": "0x8086" 00:07:07.413 }, 00:07:07.413 "ns_data": { 00:07:07.413 "can_share": true, 00:07:07.413 "id": 1 00:07:07.413 }, 00:07:07.413 "trid": { 00:07:07.413 "adrfam": "IPv4", 00:07:07.413 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:07.413 "traddr": "10.0.0.3", 00:07:07.413 "trsvcid": "4420", 00:07:07.413 "trtype": "TCP" 00:07:07.413 }, 00:07:07.413 "vs": { 00:07:07.413 "nvme_version": "1.3" 00:07:07.413 } 00:07:07.413 } 00:07:07.413 ] 00:07:07.413 }, 00:07:07.413 "memory_domains": [ 00:07:07.413 { 00:07:07.413 "dma_device_id": "system", 00:07:07.413 "dma_device_type": 1 00:07:07.413 } 00:07:07.413 ], 00:07:07.413 "name": "Nvme0n1", 00:07:07.413 "num_blocks": 38912, 00:07:07.413 "numa_id": -1, 00:07:07.414 "product_name": "NVMe disk", 00:07:07.414 "supported_io_types": { 00:07:07.414 "abort": true, 00:07:07.414 "compare": true, 00:07:07.414 "compare_and_write": true, 00:07:07.414 "copy": true, 00:07:07.414 "flush": true, 00:07:07.414 "get_zone_info": false, 00:07:07.414 "nvme_admin": true, 00:07:07.414 "nvme_io": true, 00:07:07.414 "nvme_io_md": false, 00:07:07.414 "nvme_iov_md": false, 00:07:07.414 "read": true, 00:07:07.414 "reset": true, 00:07:07.414 "seek_data": false, 00:07:07.414 "seek_hole": false, 00:07:07.414 "unmap": true, 00:07:07.414 
"write": true, 00:07:07.414 "write_zeroes": true, 00:07:07.414 "zcopy": false, 00:07:07.414 "zone_append": false, 00:07:07.414 "zone_management": false 00:07:07.414 }, 00:07:07.414 "uuid": "0b114737-c51f-4fd8-a614-8dab8716460a", 00:07:07.414 "zoned": false 00:07:07.414 } 00:07:07.414 ] 00:07:07.414 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66673 00:07:07.414 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:07.414 12:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:07.673 Running I/O for 10 seconds... 00:07:08.610 Latency(us) 00:07:08.610 [2024-12-05T12:10:39.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.610 Nvme0n1 : 1.00 7350.00 28.71 0.00 0.00 0.00 0.00 0.00 00:07:08.610 [2024-12-05T12:10:39.479Z] =================================================================================================================== 00:07:08.610 [2024-12-05T12:10:39.479Z] Total : 7350.00 28.71 0.00 0.00 0.00 0.00 0.00 00:07:08.611 00:07:09.549 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:09.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.549 Nvme0n1 : 2.00 7351.50 28.72 0.00 0.00 0.00 0.00 0.00 00:07:09.549 [2024-12-05T12:10:40.418Z] =================================================================================================================== 00:07:09.549 [2024-12-05T12:10:40.418Z] Total : 7351.50 28.72 0.00 0.00 0.00 0.00 0.00 00:07:09.549 00:07:09.808 true 00:07:09.808 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:09.808 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:10.067 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:10.067 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:10.067 12:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66673 00:07:10.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.634 Nvme0n1 : 3.00 7290.67 28.48 0.00 0.00 0.00 0.00 0.00 00:07:10.634 [2024-12-05T12:10:41.503Z] =================================================================================================================== 00:07:10.634 [2024-12-05T12:10:41.503Z] Total : 7290.67 28.48 0.00 0.00 0.00 0.00 0.00 00:07:10.634 00:07:11.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.569 Nvme0n1 : 4.00 7269.50 28.40 0.00 0.00 0.00 0.00 0.00 00:07:11.569 [2024-12-05T12:10:42.438Z] =================================================================================================================== 00:07:11.569 [2024-12-05T12:10:42.438Z] Total : 7269.50 28.40 0.00 0.00 0.00 
0.00 0.00 00:07:11.569 00:07:12.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.506 Nvme0n1 : 5.00 7286.00 28.46 0.00 0.00 0.00 0.00 0.00 00:07:12.506 [2024-12-05T12:10:43.375Z] =================================================================================================================== 00:07:12.506 [2024-12-05T12:10:43.375Z] Total : 7286.00 28.46 0.00 0.00 0.00 0.00 0.00 00:07:12.506 00:07:13.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.483 Nvme0n1 : 6.00 7271.83 28.41 0.00 0.00 0.00 0.00 0.00 00:07:13.483 [2024-12-05T12:10:44.352Z] =================================================================================================================== 00:07:13.483 [2024-12-05T12:10:44.352Z] Total : 7271.83 28.41 0.00 0.00 0.00 0.00 0.00 00:07:13.483 00:07:14.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.862 Nvme0n1 : 7.00 7251.57 28.33 0.00 0.00 0.00 0.00 0.00 00:07:14.862 [2024-12-05T12:10:45.731Z] =================================================================================================================== 00:07:14.862 [2024-12-05T12:10:45.731Z] Total : 7251.57 28.33 0.00 0.00 0.00 0.00 0.00 00:07:14.862 00:07:15.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.796 Nvme0n1 : 8.00 7226.75 28.23 0.00 0.00 0.00 0.00 0.00 00:07:15.796 [2024-12-05T12:10:46.665Z] =================================================================================================================== 00:07:15.796 [2024-12-05T12:10:46.665Z] Total : 7226.75 28.23 0.00 0.00 0.00 0.00 0.00 00:07:15.796 00:07:16.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.734 Nvme0n1 : 9.00 7183.11 28.06 0.00 0.00 0.00 0.00 0.00 00:07:16.734 [2024-12-05T12:10:47.603Z] =================================================================================================================== 00:07:16.734 [2024-12-05T12:10:47.603Z] Total : 7183.11 28.06 0.00 0.00 0.00 0.00 0.00 00:07:16.734 00:07:17.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.669 Nvme0n1 : 10.00 7169.10 28.00 0.00 0.00 0.00 0.00 0.00 00:07:17.669 [2024-12-05T12:10:48.539Z] =================================================================================================================== 00:07:17.670 [2024-12-05T12:10:48.539Z] Total : 7169.10 28.00 0.00 0.00 0.00 0.00 0.00 00:07:17.670 00:07:17.670 00:07:17.670 Latency(us) 00:07:17.670 [2024-12-05T12:10:48.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.670 Nvme0n1 : 10.00 7178.45 28.04 0.00 0.00 17825.21 5600.35 47662.55 00:07:17.670 [2024-12-05T12:10:48.539Z] =================================================================================================================== 00:07:17.670 [2024-12-05T12:10:48.539Z] Total : 7178.45 28.04 0.00 0.00 17825.21 5600.35 47662.55 00:07:17.670 { 00:07:17.670 "results": [ 00:07:17.670 { 00:07:17.670 "job": "Nvme0n1", 00:07:17.670 "core_mask": "0x2", 00:07:17.670 "workload": "randwrite", 00:07:17.670 "status": "finished", 00:07:17.670 "queue_depth": 128, 00:07:17.670 "io_size": 4096, 00:07:17.670 "runtime": 10.004811, 00:07:17.670 "iops": 7178.446449413187, 00:07:17.670 "mibps": 28.040806443020262, 00:07:17.670 "io_failed": 0, 00:07:17.670 "io_timeout": 0, 00:07:17.670 "avg_latency_us": 
17825.214584643974, 00:07:17.670 "min_latency_us": 5600.349090909091, 00:07:17.670 "max_latency_us": 47662.545454545456 00:07:17.670 } 00:07:17.670 ], 00:07:17.670 "core_count": 1 00:07:17.670 } 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66621 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66621 ']' 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66621 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66621 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:17.670 killing process with pid 66621 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66621' 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66621 00:07:17.670 Received shutdown signal, test time was about 10.000000 seconds 00:07:17.670 00:07:17.670 Latency(us) 00:07:17.670 [2024-12-05T12:10:48.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.670 [2024-12-05T12:10:48.539Z] =================================================================================================================== 00:07:17.670 [2024-12-05T12:10:48.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:17.670 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66621 00:07:17.927 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:18.185 12:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:18.443 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:18.443 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66023 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66023 00:07:18.700 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66023 Killed "${NVMF_APP[@]}" "$@" 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.700 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66840 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66840 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66840 ']' 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.958 12:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:18.958 [2024-12-05 12:10:49.640070] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:18.958 [2024-12-05 12:10:49.640177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.958 [2024-12-05 12:10:49.796644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.214 [2024-12-05 12:10:49.867331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.214 [2024-12-05 12:10:49.867432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.214 [2024-12-05 12:10:49.867451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.215 [2024-12-05 12:10:49.867465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.215 [2024-12-05 12:10:49.867478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
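What distinguishes the dirty variant shows up around this point: the nvmf target holding the grown-but-not-cleanly-unloaded lvstore is killed with SIGKILL (kill -9 66023, reported by line 75 of nvmf_lvs_grow.sh above), a fresh target is started, and re-creating the AIO bdev forces blobstore recovery (the bs_recover and "Recover: blob" notices below). A sketch of that verification, again using only commands that appear in this log; the variable names are illustrative:

    kill -9 "$nvmfpid"                             # dirty shutdown: no clean lvstore unload
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    $RPC bdev_aio_create "$AIO" aio_bdev 4096      # replays metadata, recovers the blobstore
    $RPC bdev_wait_for_examine
    $RPC bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].free_clusters'               # expect 61, as checked below
    $RPC bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'         # expect 99: the grow survived the crash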
00:07:19.215 [2024-12-05 12:10:49.868004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.215 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.215 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:19.215 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:19.215 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.215 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:19.215 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.215 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:19.776 [2024-12-05 12:10:50.377490] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:19.776 [2024-12-05 12:10:50.377968] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:19.776 [2024-12-05 12:10:50.378197] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:19.776 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:19.776 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0b114737-c51f-4fd8-a614-8dab8716460a 00:07:19.776 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0b114737-c51f-4fd8-a614-8dab8716460a 00:07:19.776 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.776 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:19.776 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.776 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.776 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:20.033 12:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0b114737-c51f-4fd8-a614-8dab8716460a -t 2000 00:07:20.290 [ 00:07:20.290 { 00:07:20.290 "aliases": [ 00:07:20.290 "lvs/lvol" 00:07:20.290 ], 00:07:20.290 "assigned_rate_limits": { 00:07:20.290 "r_mbytes_per_sec": 0, 00:07:20.290 "rw_ios_per_sec": 0, 00:07:20.290 "rw_mbytes_per_sec": 0, 00:07:20.290 "w_mbytes_per_sec": 0 00:07:20.290 }, 00:07:20.290 "block_size": 4096, 00:07:20.290 "claimed": false, 00:07:20.290 "driver_specific": { 00:07:20.290 "lvol": { 00:07:20.290 "base_bdev": "aio_bdev", 00:07:20.290 "clone": false, 00:07:20.290 "esnap_clone": false, 00:07:20.290 "lvol_store_uuid": "0fdd31e3-2714-498d-a514-7e2d54177c71", 00:07:20.290 "num_allocated_clusters": 38, 00:07:20.290 "snapshot": false, 00:07:20.290 
"thin_provision": false 00:07:20.290 } 00:07:20.290 }, 00:07:20.290 "name": "0b114737-c51f-4fd8-a614-8dab8716460a", 00:07:20.290 "num_blocks": 38912, 00:07:20.290 "product_name": "Logical Volume", 00:07:20.290 "supported_io_types": { 00:07:20.290 "abort": false, 00:07:20.290 "compare": false, 00:07:20.290 "compare_and_write": false, 00:07:20.290 "copy": false, 00:07:20.290 "flush": false, 00:07:20.290 "get_zone_info": false, 00:07:20.290 "nvme_admin": false, 00:07:20.290 "nvme_io": false, 00:07:20.290 "nvme_io_md": false, 00:07:20.290 "nvme_iov_md": false, 00:07:20.290 "read": true, 00:07:20.290 "reset": true, 00:07:20.290 "seek_data": true, 00:07:20.290 "seek_hole": true, 00:07:20.290 "unmap": true, 00:07:20.290 "write": true, 00:07:20.290 "write_zeroes": true, 00:07:20.290 "zcopy": false, 00:07:20.290 "zone_append": false, 00:07:20.290 "zone_management": false 00:07:20.290 }, 00:07:20.290 "uuid": "0b114737-c51f-4fd8-a614-8dab8716460a", 00:07:20.290 "zoned": false 00:07:20.290 } 00:07:20.290 ] 00:07:20.290 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:20.290 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:20.290 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:20.547 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:20.547 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:20.547 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:21.111 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:21.111 12:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.369 [2024-12-05 12:10:51.990406] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.369 12:10:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:21.369 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:21.627 2024/12/05 12:10:52 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0fdd31e3-2714-498d-a514-7e2d54177c71], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:21.627 request: 00:07:21.627 { 00:07:21.627 "method": "bdev_lvol_get_lvstores", 00:07:21.627 "params": { 00:07:21.627 "uuid": "0fdd31e3-2714-498d-a514-7e2d54177c71" 00:07:21.627 } 00:07:21.627 } 00:07:21.627 Got JSON-RPC error response 00:07:21.627 GoRPCClient: error on JSON-RPC call 00:07:21.627 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:21.627 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.627 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.627 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.627 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.885 aio_bdev 00:07:22.144 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0b114737-c51f-4fd8-a614-8dab8716460a 00:07:22.144 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0b114737-c51f-4fd8-a614-8dab8716460a 00:07:22.144 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.144 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:22.144 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.144 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.144 12:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:22.402 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0b114737-c51f-4fd8-a614-8dab8716460a -t 2000 00:07:22.661 [ 
00:07:22.661 { 00:07:22.661 "aliases": [ 00:07:22.661 "lvs/lvol" 00:07:22.661 ], 00:07:22.661 "assigned_rate_limits": { 00:07:22.661 "r_mbytes_per_sec": 0, 00:07:22.661 "rw_ios_per_sec": 0, 00:07:22.661 "rw_mbytes_per_sec": 0, 00:07:22.661 "w_mbytes_per_sec": 0 00:07:22.661 }, 00:07:22.661 "block_size": 4096, 00:07:22.661 "claimed": false, 00:07:22.661 "driver_specific": { 00:07:22.661 "lvol": { 00:07:22.661 "base_bdev": "aio_bdev", 00:07:22.661 "clone": false, 00:07:22.661 "esnap_clone": false, 00:07:22.661 "lvol_store_uuid": "0fdd31e3-2714-498d-a514-7e2d54177c71", 00:07:22.661 "num_allocated_clusters": 38, 00:07:22.661 "snapshot": false, 00:07:22.661 "thin_provision": false 00:07:22.661 } 00:07:22.661 }, 00:07:22.661 "name": "0b114737-c51f-4fd8-a614-8dab8716460a", 00:07:22.661 "num_blocks": 38912, 00:07:22.661 "product_name": "Logical Volume", 00:07:22.661 "supported_io_types": { 00:07:22.661 "abort": false, 00:07:22.661 "compare": false, 00:07:22.661 "compare_and_write": false, 00:07:22.661 "copy": false, 00:07:22.661 "flush": false, 00:07:22.661 "get_zone_info": false, 00:07:22.661 "nvme_admin": false, 00:07:22.661 "nvme_io": false, 00:07:22.661 "nvme_io_md": false, 00:07:22.661 "nvme_iov_md": false, 00:07:22.661 "read": true, 00:07:22.661 "reset": true, 00:07:22.661 "seek_data": true, 00:07:22.661 "seek_hole": true, 00:07:22.661 "unmap": true, 00:07:22.661 "write": true, 00:07:22.661 "write_zeroes": true, 00:07:22.661 "zcopy": false, 00:07:22.661 "zone_append": false, 00:07:22.661 "zone_management": false 00:07:22.661 }, 00:07:22.661 "uuid": "0b114737-c51f-4fd8-a614-8dab8716460a", 00:07:22.661 "zoned": false 00:07:22.661 } 00:07:22.661 ] 00:07:22.661 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:22.661 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:22.661 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:22.935 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:22.935 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:22.935 12:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:23.193 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:23.193 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0b114737-c51f-4fd8-a614-8dab8716460a 00:07:23.450 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0fdd31e3-2714-498d-a514-7e2d54177c71 00:07:23.708 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:23.966 12:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:24.533 ************************************ 00:07:24.533 END TEST lvs_grow_dirty 00:07:24.533 ************************************ 00:07:24.533 00:07:24.533 real 0m21.092s 00:07:24.533 user 0m41.945s 00:07:24.533 sys 0m9.364s 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:24.533 nvmf_trace.0 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.533 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.791 rmmod nvme_tcp 00:07:24.791 rmmod nvme_fabrics 00:07:24.791 rmmod nvme_keyring 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66840 ']' 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66840 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66840 ']' 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66840 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:24.791 12:10:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66840 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.791 killing process with pid 66840 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66840' 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66840 00:07:24.791 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66840 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:25.072 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:25.333 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:25.333 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.333 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.333 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:25.333 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.333 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.333 12:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:25.333 00:07:25.333 real 0m41.917s 00:07:25.333 user 1m6.672s 00:07:25.333 sys 0m12.500s 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.333 ************************************ 00:07:25.333 END TEST nvmf_lvs_grow 00:07:25.333 ************************************ 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.333 ************************************ 00:07:25.333 START TEST nvmf_bdev_io_wait 00:07:25.333 ************************************ 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:25.333 * Looking for test storage... 
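The lvs_grow_dirty pass that closes above follows a fixed verify-then-teardown pattern: query the lvstore by UUID, assert the cluster accounting with arithmetic tests, then delete the lvol, the lvstore, and the backing AIO bdev before removing its file. A minimal standalone sketch of that flow, with the rpc.py path, UUIDs, and expected counts taken from the run above (the lvol's JSON dump reports num_allocated_clusters 38, so 61 of the 99 data clusters must be free):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvs_uuid=0fdd31e3-2714-498d-a514-7e2d54177c71    # lvstore created earlier in the test
    lvol_uuid=0b114737-c51f-4fd8-a614-8dab8716460a   # the dirty-grown "lvs/lvol" bdev

    free_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
    data_clusters=$("$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    (( free_clusters == 61 ))    # 99 total minus 38 allocated by the lvol
    (( data_clusters == 99 ))

    # Teardown mirrors creation in reverse order.
    "$rpc" bdev_lvol_delete "$lvol_uuid"
    "$rpc" bdev_lvol_delete_lvstore -u "$lvs_uuid"
    "$rpc" bdev_aio_delete aio_bdev
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

The process_shm step that follows in the log simply archives the trace buffer (tar -C /dev/shm/ -cvzf ... nvmf_trace.0) so it survives the nvmftestfini teardown.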
00:07:25.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:25.333 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.334 --rc genhtml_branch_coverage=1 00:07:25.334 --rc genhtml_function_coverage=1 00:07:25.334 --rc genhtml_legend=1 00:07:25.334 --rc geninfo_all_blocks=1 00:07:25.334 --rc geninfo_unexecuted_blocks=1 00:07:25.334 00:07:25.334 ' 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.334 --rc genhtml_branch_coverage=1 00:07:25.334 --rc genhtml_function_coverage=1 00:07:25.334 --rc genhtml_legend=1 00:07:25.334 --rc geninfo_all_blocks=1 00:07:25.334 --rc geninfo_unexecuted_blocks=1 00:07:25.334 00:07:25.334 ' 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.334 --rc genhtml_branch_coverage=1 00:07:25.334 --rc genhtml_function_coverage=1 00:07:25.334 --rc genhtml_legend=1 00:07:25.334 --rc geninfo_all_blocks=1 00:07:25.334 --rc geninfo_unexecuted_blocks=1 00:07:25.334 00:07:25.334 ' 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.334 --rc genhtml_branch_coverage=1 00:07:25.334 --rc genhtml_function_coverage=1 00:07:25.334 --rc genhtml_legend=1 00:07:25.334 --rc geninfo_all_blocks=1 00:07:25.334 --rc geninfo_unexecuted_blocks=1 00:07:25.334 00:07:25.334 ' 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.334 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.591 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
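The single error emitted while nvmf/common.sh is sourced above ("line 33: [: : integer expression expected") is test-builtin noise rather than a failure: the '[' command requires both -eq operands to parse as integers, and here the left operand expands to the empty string. Execution visibly continues on to line 37, so the comparison evidently sits in a conditional. A small sketch of the failure mode and two defensive spellings (flag is a hypothetical stand-in for whichever configuration variable was empty):

    flag=""                                        # unset/empty config knob
    [ "$flag" -eq 1 ] && echo on                   # [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on              # default empty to 0: quiet, and false
    [[ -n "$flag" && "$flag" -eq 1 ]] && echo on   # guard before comparing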
00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:25.591 
12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:25.591 Cannot find device "nvmf_init_br" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:25.591 Cannot find device "nvmf_init_br2" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:25.591 Cannot find device "nvmf_tgt_br" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.591 Cannot find device "nvmf_tgt_br2" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:25.591 Cannot find device "nvmf_init_br" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:25.591 Cannot find device "nvmf_init_br2" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:25.591 Cannot find device "nvmf_tgt_br" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:25.591 Cannot find device "nvmf_tgt_br2" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:25.591 Cannot find device "nvmf_br" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:25.591 Cannot find device "nvmf_init_if" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:25.591 Cannot find device "nvmf_init_if2" 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:25.591 
12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:25.591 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:25.592 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:25.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:25.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:07:25.849 00:07:25.849 --- 10.0.0.3 ping statistics --- 00:07:25.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.849 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:25.849 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:25.849 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:07:25.849 00:07:25.849 --- 10.0.0.4 ping statistics --- 00:07:25.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.849 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:25.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:25.849 00:07:25.849 --- 10.0.0.1 ping statistics --- 00:07:25.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.849 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:25.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:25.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:07:25.849 00:07:25.849 --- 10.0.0.2 ping statistics --- 00:07:25.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.849 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67312 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67312 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67312 ']' 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.849 12:10:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.849 [2024-12-05 12:10:56.606108] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
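Everything from ip netns add through the four ping probes above is nvmf_veth_init assembling a self-contained TCP test network: two initiator-side veth ends stay on the host (10.0.0.1 and .2), two target-side ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and .4), and the bridge-side peers are joined by nvmf_br, with iptables ACCEPT rules punched for the NVMe/TCP port 4420. Condensed into a sketch (the full script's cleanup and device probing are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # and likewise .4 from the host, .1/.2 from the namespace

The sub-0.1 ms round trips in the ping output are expected: every hop is a veth pair over one in-kernel bridge.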
00:07:25.849 [2024-12-05 12:10:56.606200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.108 [2024-12-05 12:10:56.751231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.108 [2024-12-05 12:10:56.804200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.108 [2024-12-05 12:10:56.804264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.108 [2024-12-05 12:10:56.804275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.108 [2024-12-05 12:10:56.804282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.108 [2024-12-05 12:10:56.804289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.108 [2024-12-05 12:10:56.805690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.108 [2024-12-05 12:10:56.805863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.108 [2024-12-05 12:10:56.805968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.108 [2024-12-05 12:10:56.806082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.044 [2024-12-05 12:10:57.697681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.044 Malloc0 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.044 [2024-12-05 12:10:57.753225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67365 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67367 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.044 { 00:07:27.044 "params": { 
00:07:27.044 "name": "Nvme$subsystem", 00:07:27.044 "trtype": "$TEST_TRANSPORT", 00:07:27.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.044 "adrfam": "ipv4", 00:07:27.044 "trsvcid": "$NVMF_PORT", 00:07:27.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.044 "hdgst": ${hdgst:-false}, 00:07:27.044 "ddgst": ${ddgst:-false} 00:07:27.044 }, 00:07:27.044 "method": "bdev_nvme_attach_controller" 00:07:27.044 } 00:07:27.044 EOF 00:07:27.044 )") 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67369 00:07:27.044 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.045 { 00:07:27.045 "params": { 00:07:27.045 "name": "Nvme$subsystem", 00:07:27.045 "trtype": "$TEST_TRANSPORT", 00:07:27.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.045 "adrfam": "ipv4", 00:07:27.045 "trsvcid": "$NVMF_PORT", 00:07:27.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.045 "hdgst": ${hdgst:-false}, 00:07:27.045 "ddgst": ${ddgst:-false} 00:07:27.045 }, 00:07:27.045 "method": "bdev_nvme_attach_controller" 00:07:27.045 } 00:07:27.045 EOF 00:07:27.045 )") 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67372 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.045 { 00:07:27.045 "params": { 00:07:27.045 "name": "Nvme$subsystem", 00:07:27.045 "trtype": "$TEST_TRANSPORT", 00:07:27.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.045 "adrfam": "ipv4", 00:07:27.045 "trsvcid": "$NVMF_PORT", 00:07:27.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.045 "hdgst": ${hdgst:-false}, 00:07:27.045 "ddgst": ${ddgst:-false} 00:07:27.045 }, 00:07:27.045 "method": "bdev_nvme_attach_controller" 00:07:27.045 } 00:07:27.045 EOF 00:07:27.045 )") 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.045 12:10:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.045 "params": { 00:07:27.045 "name": "Nvme1", 00:07:27.045 "trtype": "tcp", 00:07:27.045 "traddr": "10.0.0.3", 00:07:27.045 "adrfam": "ipv4", 00:07:27.045 "trsvcid": "4420", 00:07:27.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.045 "hdgst": false, 00:07:27.045 "ddgst": false 00:07:27.045 }, 00:07:27.045 "method": "bdev_nvme_attach_controller" 00:07:27.045 }' 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.045 { 00:07:27.045 "params": { 00:07:27.045 "name": "Nvme$subsystem", 00:07:27.045 "trtype": "$TEST_TRANSPORT", 00:07:27.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.045 "adrfam": "ipv4", 00:07:27.045 "trsvcid": "$NVMF_PORT", 00:07:27.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.045 "hdgst": ${hdgst:-false}, 00:07:27.045 "ddgst": ${ddgst:-false} 00:07:27.045 }, 00:07:27.045 "method": "bdev_nvme_attach_controller" 00:07:27.045 } 00:07:27.045 EOF 00:07:27.045 )") 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.045 "params": { 00:07:27.045 "name": "Nvme1", 00:07:27.045 "trtype": "tcp", 00:07:27.045 "traddr": "10.0.0.3", 00:07:27.045 "adrfam": "ipv4", 00:07:27.045 "trsvcid": "4420", 00:07:27.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.045 "hdgst": false, 00:07:27.045 "ddgst": false 00:07:27.045 }, 00:07:27.045 "method": "bdev_nvme_attach_controller" 00:07:27.045 }' 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
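The four bdevperf launches interleaved above differ only in core mask, shm instance id, and workload; the /dev/fd/63 in each captured command line is bash process substitution feeding in the generated attach-controller JSON, whose expanded form appears in the printf output nearby in the log. A sketch of the launch pattern, where run_bp is a hypothetical helper and gen_nvmf_target_json is the generator sourced from test/nvmf/common.sh:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    run_bp() {   # run_bp <core-mask> <shm-id> <workload>
        "$BDEVPERF" -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "$3" -t 1 -s 256 &
    }

    run_bp 0x10 1 write; WRITE_PID=$!
    run_bp 0x20 2 read;  READ_PID=$!
    run_bp 0x40 3 flush; FLUSH_PID=$!
    run_bp 0x80 4 unmap; UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The distinct -i values matter: each process gets its own EAL --file-prefix (spdk1 through spdk4 in the DPDK parameter lines that follow), so the four bdevperf instances and the nvmf_tgt target (spdk0) can coexist on one machine's hugepages without colliding over shm files.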
00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.045 "params": { 00:07:27.045 "name": "Nvme1", 00:07:27.045 "trtype": "tcp", 00:07:27.045 "traddr": "10.0.0.3", 00:07:27.045 "adrfam": "ipv4", 00:07:27.045 "trsvcid": "4420", 00:07:27.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.045 "hdgst": false, 00:07:27.045 "ddgst": false 00:07:27.045 }, 00:07:27.045 "method": "bdev_nvme_attach_controller" 00:07:27.045 }' 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.045 "params": { 00:07:27.045 "name": "Nvme1", 00:07:27.045 "trtype": "tcp", 00:07:27.045 "traddr": "10.0.0.3", 00:07:27.045 "adrfam": "ipv4", 00:07:27.045 "trsvcid": "4420", 00:07:27.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:27.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:27.045 "hdgst": false, 00:07:27.045 "ddgst": false 00:07:27.045 }, 00:07:27.045 "method": "bdev_nvme_attach_controller" 00:07:27.045 }' 00:07:27.045 [2024-12-05 12:10:57.814468] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:27.045 [2024-12-05 12:10:57.814588] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:27.045 12:10:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67365 00:07:27.045 [2024-12-05 12:10:57.834937] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:27.045 [2024-12-05 12:10:57.835429] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:27.045 [2024-12-05 12:10:57.846083] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:27.045 [2024-12-05 12:10:57.846160] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:27.045 [2024-12-05 12:10:57.866413] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:07:27.045 [2024-12-05 12:10:57.866537] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:27.304 [2024-12-05 12:10:58.026043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.304 [2024-12-05 12:10:58.071289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:27.304 [2024-12-05 12:10:58.105042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.304 [2024-12-05 12:10:58.152150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:27.563 [2024-12-05 12:10:58.182613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.563 [2024-12-05 12:10:58.241995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:27.563 [2024-12-05 12:10:58.269947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.563 Running I/O for 1 seconds... 00:07:27.563 Running I/O for 1 seconds... 00:07:27.563 [2024-12-05 12:10:58.342306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:27.563 Running I/O for 1 seconds... 00:07:27.821 Running I/O for 1 seconds... 00:07:28.757 189648.00 IOPS, 740.81 MiB/s 00:07:28.757 Latency(us) 00:07:28.757 [2024-12-05T12:10:59.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.757 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:28.757 Nvme1n1 : 1.00 189285.19 739.40 0.00 0.00 672.58 310.92 1891.61 00:07:28.757 [2024-12-05T12:10:59.626Z] =================================================================================================================== 00:07:28.757 [2024-12-05T12:10:59.626Z] Total : 189285.19 739.40 0.00 0.00 672.58 310.92 1891.61 00:07:28.757 9914.00 IOPS, 38.73 MiB/s 00:07:28.757 Latency(us) 00:07:28.757 [2024-12-05T12:10:59.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.757 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:28.757 Nvme1n1 : 1.01 9954.87 38.89 0.00 0.00 12797.99 7536.64 18588.39 00:07:28.757 [2024-12-05T12:10:59.626Z] =================================================================================================================== 00:07:28.757 [2024-12-05T12:10:59.626Z] Total : 9954.87 38.89 0.00 0.00 12797.99 7536.64 18588.39 00:07:28.757 8821.00 IOPS, 34.46 MiB/s 00:07:28.757 Latency(us) 00:07:28.757 [2024-12-05T12:10:59.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.757 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:28.757 Nvme1n1 : 1.01 8898.11 34.76 0.00 0.00 14324.08 5898.24 22401.40 00:07:28.757 [2024-12-05T12:10:59.626Z] =================================================================================================================== 00:07:28.757 [2024-12-05T12:10:59.626Z] Total : 8898.11 34.76 0.00 0.00 14324.08 5898.24 22401.40 00:07:28.757 8503.00 IOPS, 33.21 MiB/s 00:07:28.757 Latency(us) 00:07:28.757 [2024-12-05T12:10:59.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.757 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:28.757 Nvme1n1 : 1.01 8578.75 33.51 0.00 0.00 14855.70 6494.02 26333.56 00:07:28.758 [2024-12-05T12:10:59.627Z] 
=================================================================================================================== 00:07:28.758 [2024-12-05T12:10:59.627Z] Total : 8578.75 33.51 0.00 0.00 14855.70 6494.02 26333.56 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67367 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67369 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67372 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.016 rmmod nvme_tcp 00:07:29.016 rmmod nvme_fabrics 00:07:29.016 rmmod nvme_keyring 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67312 ']' 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67312 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67312 ']' 00:07:29.016 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67312 00:07:29.017 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:29.017 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.017 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67312 00:07:29.017 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.017 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.017 killing process with pid 67312 00:07:29.017 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67312' 00:07:29.017 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67312 00:07:29.017 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67312 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:29.275 12:10:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:29.275 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.275 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:29.275 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:29.275 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:29.275 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:29.275 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:29.275 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:29.275 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:29.276 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.536 12:11:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:29.536 00:07:29.536 real 0m4.161s 00:07:29.536 user 0m17.225s 00:07:29.536 sys 0m2.147s 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:29.536 ************************************ 00:07:29.536 END TEST nvmf_bdev_io_wait 00:07:29.536 ************************************ 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.536 ************************************ 00:07:29.536 START TEST nvmf_queue_depth 00:07:29.536 ************************************ 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:29.536 * Looking for test storage... 00:07:29.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.536 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.795 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.796 --rc genhtml_branch_coverage=1 00:07:29.796 --rc genhtml_function_coverage=1 00:07:29.796 --rc genhtml_legend=1 00:07:29.796 --rc geninfo_all_blocks=1 00:07:29.796 --rc geninfo_unexecuted_blocks=1 00:07:29.796 00:07:29.796 ' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.796 --rc genhtml_branch_coverage=1 00:07:29.796 --rc genhtml_function_coverage=1 00:07:29.796 --rc genhtml_legend=1 00:07:29.796 --rc geninfo_all_blocks=1 00:07:29.796 --rc geninfo_unexecuted_blocks=1 00:07:29.796 00:07:29.796 ' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.796 --rc genhtml_branch_coverage=1 00:07:29.796 --rc genhtml_function_coverage=1 00:07:29.796 --rc genhtml_legend=1 00:07:29.796 --rc geninfo_all_blocks=1 00:07:29.796 --rc geninfo_unexecuted_blocks=1 00:07:29.796 00:07:29.796 ' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.796 --rc genhtml_branch_coverage=1 00:07:29.796 --rc genhtml_function_coverage=1 00:07:29.796 --rc genhtml_legend=1 00:07:29.796 --rc geninfo_all_blocks=1 00:07:29.796 --rc geninfo_unexecuted_blocks=1 00:07:29.796 00:07:29.796 ' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:29.796 12:11:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.796 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:29.796 
12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.796 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:29.797 12:11:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:29.797 Cannot find device "nvmf_init_br" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:29.797 Cannot find device "nvmf_init_br2" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:29.797 Cannot find device "nvmf_tgt_br" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.797 Cannot find device "nvmf_tgt_br2" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:29.797 Cannot find device "nvmf_init_br" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:29.797 Cannot find device "nvmf_init_br2" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:29.797 Cannot find device "nvmf_tgt_br" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:29.797 Cannot find device "nvmf_tgt_br2" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:29.797 Cannot find device "nvmf_br" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:29.797 Cannot find device "nvmf_init_if" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:29.797 Cannot find device "nvmf_init_if2" 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.797 12:11:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:29.797 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:30.056 
12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:30.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:30.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:30.056 00:07:30.056 --- 10.0.0.3 ping statistics --- 00:07:30.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.056 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:30.056 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:30.056 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:07:30.056 00:07:30.056 --- 10.0.0.4 ping statistics --- 00:07:30.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.056 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:30.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:07:30.056 00:07:30.056 --- 10.0.0.1 ping statistics --- 00:07:30.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.056 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:30.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:07:30.056 00:07:30.056 --- 10.0.0.2 ping statistics --- 00:07:30.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.056 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67654 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67654 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67654 ']' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.056 12:11:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.056 [2024-12-05 12:11:00.916165] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:07:30.056 [2024-12-05 12:11:00.916228] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.315 [2024-12-05 12:11:01.061688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.315 [2024-12-05 12:11:01.130142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.315 [2024-12-05 12:11:01.130450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.315 [2024-12-05 12:11:01.130623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.315 [2024-12-05 12:11:01.130762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.315 [2024-12-05 12:11:01.130800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.315 [2024-12-05 12:11:01.131303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.574 [2024-12-05 12:11:01.330184] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.574 Malloc0 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.574 [2024-12-05 12:11:01.389259] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67685 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67685 /var/tmp/bdevperf.sock 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67685 ']' 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.574 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:30.833 [2024-12-05 12:11:01.458066] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:07:30.833 [2024-12-05 12:11:01.458154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67685 ] 00:07:30.833 [2024-12-05 12:11:01.612634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.833 [2024-12-05 12:11:01.678913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.092 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.092 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:31.092 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:31.092 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.092 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:31.092 NVMe0n1 00:07:31.092 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.092 12:11:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:31.351 Running I/O for 10 seconds... 00:07:33.224 8263.00 IOPS, 32.28 MiB/s [2024-12-05T12:11:05.468Z] 8532.50 IOPS, 33.33 MiB/s [2024-12-05T12:11:06.035Z] 8734.67 IOPS, 34.12 MiB/s [2024-12-05T12:11:07.418Z] 8950.00 IOPS, 34.96 MiB/s [2024-12-05T12:11:08.356Z] 9199.80 IOPS, 35.94 MiB/s [2024-12-05T12:11:09.293Z] 9309.50 IOPS, 36.37 MiB/s [2024-12-05T12:11:10.226Z] 9242.86 IOPS, 36.10 MiB/s [2024-12-05T12:11:11.156Z] 9323.88 IOPS, 36.42 MiB/s [2024-12-05T12:11:12.137Z] 9237.00 IOPS, 36.08 MiB/s [2024-12-05T12:11:12.137Z] 9185.00 IOPS, 35.88 MiB/s 00:07:41.268 Latency(us) 00:07:41.268 [2024-12-05T12:11:12.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.268 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:41.268 Verification LBA range: start 0x0 length 0x4000 00:07:41.268 NVMe0n1 : 10.09 9205.71 35.96 0.00 0.00 110691.02 27763.43 88175.71 00:07:41.268 [2024-12-05T12:11:12.137Z] =================================================================================================================== 00:07:41.268 [2024-12-05T12:11:12.137Z] Total : 9205.71 35.96 0.00 0.00 110691.02 27763.43 88175.71 00:07:41.268 { 00:07:41.268 "results": [ 00:07:41.268 { 00:07:41.268 "job": "NVMe0n1", 00:07:41.268 "core_mask": "0x1", 00:07:41.268 "workload": "verify", 00:07:41.268 "status": "finished", 00:07:41.268 "verify_range": { 00:07:41.268 "start": 0, 00:07:41.268 "length": 16384 00:07:41.268 }, 00:07:41.268 "queue_depth": 1024, 00:07:41.268 "io_size": 4096, 00:07:41.268 "runtime": 10.087002, 00:07:41.268 "iops": 9205.708494952216, 00:07:41.268 "mibps": 35.95979880840709, 00:07:41.268 "io_failed": 0, 00:07:41.268 "io_timeout": 0, 00:07:41.268 "avg_latency_us": 110691.01861994561, 00:07:41.268 "min_latency_us": 27763.432727272728, 00:07:41.268 "max_latency_us": 88175.70909090909 00:07:41.268 } 00:07:41.268 ], 00:07:41.268 "core_count": 1 00:07:41.268 } 00:07:41.268 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 67685 00:07:41.268 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67685 ']' 00:07:41.268 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67685 00:07:41.268 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:41.268 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.268 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67685 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.525 killing process with pid 67685 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67685' 00:07:41.525 Received shutdown signal, test time was about 10.000000 seconds 00:07:41.525 00:07:41.525 Latency(us) 00:07:41.525 [2024-12-05T12:11:12.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.525 [2024-12-05T12:11:12.394Z] =================================================================================================================== 00:07:41.525 [2024-12-05T12:11:12.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67685 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67685 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.525 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.782 rmmod nvme_tcp 00:07:41.782 rmmod nvme_fabrics 00:07:41.782 rmmod nvme_keyring 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67654 ']' 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67654 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67654 ']' 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67654 00:07:41.782 12:11:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67654 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:41.782 killing process with pid 67654 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67654' 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67654 00:07:41.782 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67654 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:42.039 12:11:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:07:42.039 00:07:42.039 real 0m12.637s 00:07:42.039 user 0m21.403s 00:07:42.039 sys 0m2.083s 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.039 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.039 ************************************ 00:07:42.039 END TEST nvmf_queue_depth 00:07:42.039 ************************************ 00:07:42.298 12:11:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:42.298 12:11:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.298 12:11:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.298 12:11:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.298 ************************************ 00:07:42.298 START TEST nvmf_target_multipath 00:07:42.298 ************************************ 00:07:42.298 12:11:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:42.298 * Looking for test storage... 
00:07:42.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.298 --rc genhtml_branch_coverage=1 00:07:42.298 --rc genhtml_function_coverage=1 00:07:42.298 --rc genhtml_legend=1 00:07:42.298 --rc geninfo_all_blocks=1 00:07:42.298 --rc geninfo_unexecuted_blocks=1 00:07:42.298 00:07:42.298 ' 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.298 --rc genhtml_branch_coverage=1 00:07:42.298 --rc genhtml_function_coverage=1 00:07:42.298 --rc genhtml_legend=1 00:07:42.298 --rc geninfo_all_blocks=1 00:07:42.298 --rc geninfo_unexecuted_blocks=1 00:07:42.298 00:07:42.298 ' 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.298 --rc genhtml_branch_coverage=1 00:07:42.298 --rc genhtml_function_coverage=1 00:07:42.298 --rc genhtml_legend=1 00:07:42.298 --rc geninfo_all_blocks=1 00:07:42.298 --rc geninfo_unexecuted_blocks=1 00:07:42.298 00:07:42.298 ' 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.298 --rc genhtml_branch_coverage=1 00:07:42.298 --rc genhtml_function_coverage=1 00:07:42.298 --rc genhtml_legend=1 00:07:42.298 --rc geninfo_all_blocks=1 00:07:42.298 --rc geninfo_unexecuted_blocks=1 00:07:42.298 00:07:42.298 ' 00:07:42.298 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.299 
12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.299 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:42.299 12:11:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:42.299 Cannot find device "nvmf_init_br" 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:42.299 Cannot find device "nvmf_init_br2" 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:07:42.299 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:42.558 Cannot find device "nvmf_tgt_br" 00:07:42.558 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:07:42.558 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.558 Cannot find device "nvmf_tgt_br2" 00:07:42.558 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:07:42.558 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:42.558 Cannot find device "nvmf_init_br" 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:42.559 Cannot find device "nvmf_init_br2" 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:42.559 Cannot find device "nvmf_tgt_br" 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:42.559 Cannot find device "nvmf_tgt_br2" 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:42.559 Cannot find device "nvmf_br" 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:42.559 Cannot find device "nvmf_init_if" 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:42.559 Cannot find device "nvmf_init_if2" 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:42.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
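The nvmf_veth_init sequence traced here boils down to a small recipe: give the target its own network namespace, create a veth pair per interface, address both ends, and (as the trace continues below) join the host-side peers with a bridge. A condensed sketch of one initiator/target pair, reusing the interface names and 10.0.0.0/24 addresses from the log; the harness builds a second pair of each (nvmf_init_if2/nvmf_tgt_if2 and their peers) the same way:

    ip netns add nvmf_tgt_ns_spdk                              # target side lives in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge                            # joins the two host-side peers
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                         # reachability check, as in the trace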
00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:42.559 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:42.817 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:42.817 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:07:42.817 00:07:42.817 --- 10.0.0.3 ping statistics --- 00:07:42.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.817 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:42.817 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:42.817 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:07:42.817 00:07:42.817 --- 10.0.0.4 ping statistics --- 00:07:42.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.817 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:42.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:42.817 00:07:42.817 --- 10.0.0.1 ping statistics --- 00:07:42.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.817 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:42.817 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:42.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:07:42.817 00:07:42.817 --- 10.0.0.2 ping statistics --- 00:07:42.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.818 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68061 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68061 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 68061 ']' 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:07:42.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.818 12:11:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:42.818 [2024-12-05 12:11:13.575038] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:07:42.818 [2024-12-05 12:11:13.575422] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.076 [2024-12-05 12:11:13.718893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.076 [2024-12-05 12:11:13.780512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.076 [2024-12-05 12:11:13.780742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.076 [2024-12-05 12:11:13.780925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.076 [2024-12-05 12:11:13.781089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.076 [2024-12-05 12:11:13.781129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.076 [2024-12-05 12:11:13.782472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.076 [2024-12-05 12:11:13.782763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.076 [2024-12-05 12:11:13.783416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.076 [2024-12-05 12:11:13.783420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.013 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.013 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:07:44.013 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.013 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.013 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:44.013 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.013 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:44.013 [2024-12-05 12:11:14.792157] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.013 12:11:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:07:44.272 Malloc0 00:07:44.272 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:07:44.530 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:44.788 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:45.046 [2024-12-05 12:11:15.845816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:45.046 12:11:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:07:45.305 [2024-12-05 12:11:16.093997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:07:45.305 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:07:45.563 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:07:45.821 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:07:45.821 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:07:45.821 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:07:45.821 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:07:45.821 12:11:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 
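Everything from nvmf_create_transport through the two nvme connect calls above is the standard two-listener multipath provisioning. A condensed sketch with the flags copied from the trace ($rpc is shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and $NVME_HOSTNQN/$NVME_HOSTID hold the generated values shown earlier; the comments are annotations, not log output):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192               # TCP transport with the harness's options
    $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MB RAM-backed bdev, 512 B blocks
    # -a: any host may connect; -r: ANA reporting, which the failover below relies on
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
    # One connect per listener gives the host two paths to the same namespace;
    # -g/-G request TCP header/data digests.
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
    # Failover, exercised twice later in the trace while fio runs: flip each
    # listener's ANA state, then poll /sys/block/nvme0c*n1/ana_state until the
    # kernel agrees (check_ana_state, up to 20 x 1 s retries).
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.4 -s 4420 -n non_optimized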
00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68199 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:07:47.722 12:11:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:47.722 [global] 00:07:47.722 thread=1 00:07:47.722 invalidate=1 00:07:47.722 rw=randrw 00:07:47.722 time_based=1 00:07:47.722 runtime=6 00:07:47.722 ioengine=libaio 00:07:47.722 direct=1 00:07:47.722 bs=4096 00:07:47.722 iodepth=128 00:07:47.722 norandommap=0 00:07:47.722 numjobs=1 00:07:47.722 00:07:47.722 verify_dump=1 00:07:47.722 verify_backlog=512 00:07:47.722 verify_state_save=0 00:07:47.723 do_verify=1 00:07:47.723 verify=crc32c-intel 00:07:47.723 [job0] 00:07:47.723 filename=/dev/nvme0n1 00:07:47.981 Could not set queue depth (nvme0n1) 00:07:47.981 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:47.981 fio-3.35 00:07:47.981 Starting 1 thread 00:07:48.935 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:49.193 12:11:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:49.452 12:11:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:07:50.388 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:07:50.388 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:50.388 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:50.388 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:50.647 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:51.214 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:07:51.214 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:51.214 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:51.214 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:51.215 12:11:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:07:52.149 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:07:52.149 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:52.149 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:52.149 12:11:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68199 00:07:54.053 00:07:54.053 job0: (groupid=0, jobs=1): err= 0: pid=68225: Thu Dec 5 12:11:24 2024 00:07:54.053 read: IOPS=10.8k, BW=42.0MiB/s (44.1MB/s)(252MiB/6006msec) 00:07:54.054 slat (usec): min=3, max=5409, avg=53.10, stdev=234.14 00:07:54.054 clat (usec): min=1067, max=17155, avg=8075.08, stdev=1272.52 00:07:54.054 lat (usec): min=1088, max=17172, avg=8128.19, stdev=1282.23 00:07:54.054 clat percentiles (usec): 00:07:54.054 | 1.00th=[ 4817], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7242], 00:07:54.054 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8225], 00:07:54.054 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10159], 00:07:54.054 | 99.00th=[11994], 99.50th=[12518], 99.90th=[13960], 99.95th=[15401], 00:07:54.054 | 99.99th=[15664] 00:07:54.054 bw ( KiB/s): min= 8544, max=29512, per=53.31%, avg=22949.82, stdev=6509.37, samples=11 00:07:54.054 iops : min= 2136, max= 7378, avg=5737.45, stdev=1627.34, samples=11 00:07:54.054 write: IOPS=6449, BW=25.2MiB/s (26.4MB/s)(136MiB/5390msec); 0 zone resets 00:07:54.054 slat (usec): min=11, max=2150, avg=63.25, stdev=157.90 00:07:54.054 clat (usec): min=777, max=12986, avg=6901.69, stdev=1030.06 00:07:54.054 lat (usec): min=824, max=13018, avg=6964.93, stdev=1031.86 00:07:54.054 clat percentiles (usec): 00:07:54.054 | 1.00th=[ 3818], 5.00th=[ 5014], 10.00th=[ 5866], 20.00th=[ 6325], 00:07:54.054 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:07:54.054 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8225], 00:07:54.054 | 99.00th=[ 9896], 99.50th=[10683], 99.90th=[11994], 99.95th=[12387], 00:07:54.054 | 99.99th=[12911] 00:07:54.054 bw ( KiB/s): min= 8592, max=28800, per=88.97%, avg=22953.45, stdev=6249.44, samples=11 00:07:54.054 iops : min= 2148, max= 7200, avg=5738.36, stdev=1562.36, samples=11 00:07:54.054 lat (usec) : 1000=0.01% 00:07:54.054 lat (msec) : 2=0.05%, 4=0.62%, 10=95.16%, 20=4.16% 00:07:54.054 cpu : usr=5.50%, sys=24.53%, ctx=6368, majf=0, minf=102 00:07:54.054 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:07:54.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:54.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:54.054 issued rwts: total=64631,34763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:54.054 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:54.054 00:07:54.054 Run status group 0 (all jobs): 00:07:54.054 READ: bw=42.0MiB/s (44.1MB/s), 42.0MiB/s-42.0MiB/s (44.1MB/s-44.1MB/s), io=252MiB (265MB), run=6006-6006msec 00:07:54.054 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=136MiB (142MB), run=5390-5390msec 00:07:54.054 00:07:54.054 Disk stats (read/write): 00:07:54.054 nvme0n1: ios=63690/34105, merge=0/0, ticks=480677/219770, in_queue=700447, util=98.51% 00:07:54.312 12:11:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:07:54.575 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:07:54.832 12:11:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:07:55.766 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:07:55.766 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:55.766 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:07:55.766 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:07:55.766 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68360 00:07:55.766 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:07:55.766 12:11:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:07:55.766 [global] 00:07:55.766 thread=1 00:07:55.766 invalidate=1 00:07:55.766 rw=randrw 00:07:55.766 time_based=1 00:07:55.766 runtime=6 00:07:55.766 ioengine=libaio 00:07:55.766 direct=1 00:07:55.766 bs=4096 00:07:55.766 iodepth=128 00:07:55.766 norandommap=0 00:07:55.766 numjobs=1 00:07:55.766 00:07:55.766 verify_dump=1 00:07:55.766 verify_backlog=512 00:07:55.766 verify_state_save=0 00:07:55.766 do_verify=1 00:07:55.766 verify=crc32c-intel 00:07:55.766 [job0] 00:07:55.766 filename=/dev/nvme0n1 00:07:55.766 Could not set queue depth (nvme0n1) 00:07:56.024 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:56.024 fio-3.35 00:07:56.024 Starting 1 thread 00:07:56.962 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:07:57.220 12:11:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:57.479 12:11:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:07:58.523 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:07:58.523 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:58.523 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:58.523 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:07:58.782 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:59.041 12:11:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:07:59.978 12:11:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:07:59.978 12:11:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:07:59.978 12:11:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:07:59.978 12:11:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68360 00:08:02.513 00:08:02.513 job0: (groupid=0, jobs=1): err= 0: pid=68381: Thu Dec 5 12:11:32 2024 00:08:02.513 read: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(284MiB/6006msec) 00:08:02.513 slat (usec): min=2, max=5453, avg=41.74, stdev=199.30 00:08:02.513 clat (usec): min=294, max=14838, avg=7279.99, stdev=1555.57 00:08:02.513 lat (usec): min=330, max=15494, avg=7321.74, stdev=1572.33 00:08:02.513 clat percentiles (usec): 00:08:02.513 | 1.00th=[ 2966], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 6063], 00:08:02.513 | 30.00th=[ 6915], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7635], 00:08:02.513 | 70.00th=[ 8029], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9503], 00:08:02.513 | 99.00th=[11207], 99.50th=[11600], 99.90th=[12649], 99.95th=[13042], 00:08:02.513 | 99.99th=[13829] 00:08:02.513 bw ( KiB/s): min= 4776, max=40272, per=53.57%, avg=25909.82, stdev=10555.92, samples=11 00:08:02.513 iops : min= 1194, max=10068, avg=6477.45, stdev=2638.98, samples=11 00:08:02.513 write: IOPS=7516, BW=29.4MiB/s (30.8MB/s)(152MiB/5164msec); 0 zone resets 00:08:02.513 slat (usec): min=14, max=1848, avg=52.77, stdev=130.37 00:08:02.513 clat (usec): min=332, max=12700, avg=6018.00, stdev=1558.69 00:08:02.513 lat (usec): min=377, max=12722, avg=6070.77, stdev=1571.96 00:08:02.513 clat percentiles (usec): 00:08:02.513 | 1.00th=[ 2573], 5.00th=[ 3326], 10.00th=[ 3752], 20.00th=[ 4359], 00:08:02.513 | 30.00th=[ 5080], 40.00th=[ 6063], 50.00th=[ 6456], 60.00th=[ 6783], 00:08:02.513 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 7898], 00:08:02.513 | 99.00th=[ 8979], 99.50th=[10159], 99.90th=[11600], 99.95th=[12256], 00:08:02.513 | 99.99th=[12649] 00:08:02.513 bw ( KiB/s): min= 4640, max=40176, per=86.23%, avg=25925.09, stdev=10418.74, samples=11 00:08:02.513 iops : min= 1160, max=10044, avg=6481.27, stdev=2604.69, samples=11 00:08:02.513 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:08:02.513 lat (msec) : 2=0.31%, 4=6.21%, 10=91.37%, 20=2.06% 00:08:02.513 cpu : usr=6.24%, sys=26.56%, ctx=7442, majf=0, minf=163 00:08:02.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:08:02.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:02.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:02.513 issued rwts: total=72619,38816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:02.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:02.513 00:08:02.513 Run status group 0 (all jobs): 00:08:02.513 READ: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=284MiB (297MB), run=6006-6006msec 00:08:02.513 WRITE: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=152MiB (159MB), run=5164-5164msec 00:08:02.513 00:08:02.513 Disk stats (read/write): 00:08:02.513 nvme0n1: ios=71693/38171, merge=0/0, ticks=485148/210752, in_queue=695900, util=98.66% 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:02.513 12:11:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.513 rmmod nvme_tcp 00:08:02.513 rmmod nvme_fabrics 00:08:02.513 rmmod nvme_keyring 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68061 ']' 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68061 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 68061 ']' 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 68061 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.513 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68061 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.772 killing process with pid 68061 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68061' 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 68061 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 68061 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:02.772 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:03.032 12:11:33 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:03.032 00:08:03.032 real 0m20.880s 00:08:03.032 user 1m21.052s 00:08:03.032 sys 0m7.242s 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:03.032 ************************************ 00:08:03.032 END TEST nvmf_target_multipath 00:08:03.032 ************************************ 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.032 ************************************ 00:08:03.032 START TEST nvmf_zcopy 00:08:03.032 ************************************ 00:08:03.032 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:03.293 * Looking for test storage... 
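The check_ana_state helper traced repeatedly in the multipath test above (target/multipath.sh lines 18-26) is a timeout-bounded poll of the kernel's per-controller ANA state file. A minimal sketch reconstructed from that trace; the real script may order the sleep and the timeout check slightly differently:

    check_ana_state() {
        local path=$1 ana_state=$2            # e.g. nvme0c1n1, "inaccessible"
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Poll until the sysfs file exists and reports the expected ANA state.
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1  # give up after ~20 one-second waits
            sleep 1s
        done
    }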
00:08:03.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:03.293 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:03.293 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:03.293 12:11:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:03.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.293 --rc genhtml_branch_coverage=1 00:08:03.293 --rc genhtml_function_coverage=1 00:08:03.293 --rc genhtml_legend=1 00:08:03.293 --rc geninfo_all_blocks=1 00:08:03.293 --rc geninfo_unexecuted_blocks=1 00:08:03.293 00:08:03.293 ' 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:03.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.293 --rc genhtml_branch_coverage=1 00:08:03.293 --rc genhtml_function_coverage=1 00:08:03.293 --rc genhtml_legend=1 00:08:03.293 --rc geninfo_all_blocks=1 00:08:03.293 --rc geninfo_unexecuted_blocks=1 00:08:03.293 00:08:03.293 ' 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:03.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.293 --rc genhtml_branch_coverage=1 00:08:03.293 --rc genhtml_function_coverage=1 00:08:03.293 --rc genhtml_legend=1 00:08:03.293 --rc geninfo_all_blocks=1 00:08:03.293 --rc geninfo_unexecuted_blocks=1 00:08:03.293 00:08:03.293 ' 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:03.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.293 --rc genhtml_branch_coverage=1 00:08:03.293 --rc genhtml_function_coverage=1 00:08:03.293 --rc genhtml_legend=1 00:08:03.293 --rc geninfo_all_blocks=1 00:08:03.293 --rc geninfo_unexecuted_blocks=1 00:08:03.293 00:08:03.293 ' 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
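The lt/cmp_versions trace above (scripts/common.sh lines 333-368, gating which lcov options get used) compares version strings component-wise after splitting on '.', '-' and ':'. A sketch reconstructed from the trace; the :-0 defaults for missing components are an assumption not visible in this run:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local ver1 ver2 v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # equal throughout: true only for <=, >=, ==
    }

For lt 1.15 2 this returns success at the first component, since 1 < 2, exactly as traced.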
00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.293 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:03.294 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
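nvmftestinit (entered above) sees NET_TYPE=virt and a tcp transport, so it calls nvmf_veth_init, whose trace follows. Condensed into a sketch, with the commands exactly as they appear verbatim below, this is the topology it builds:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge joins the host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # ...repeated for the second pair (nvmf_init_if2 at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.4),
    # then every link is brought up and each address is pinged once to verify the path.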
00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:03.294 Cannot find device "nvmf_init_br" 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:03.294 12:11:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:03.294 Cannot find device "nvmf_init_br2" 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:03.294 Cannot find device "nvmf_tgt_br" 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:03.294 Cannot find device "nvmf_tgt_br2" 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:03.294 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:03.294 Cannot find device "nvmf_init_br" 00:08:03.295 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:03.295 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:03.603 Cannot find device "nvmf_init_br2" 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:03.603 Cannot find device "nvmf_tgt_br" 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:03.603 Cannot find device "nvmf_tgt_br2" 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:03.603 Cannot find device "nvmf_br" 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:03.603 Cannot find device "nvmf_init_if" 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:03.603 Cannot find device "nvmf_init_if2" 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:03.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:03.603 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:03.603 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:03.604 12:11:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:03.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:03.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:08:03.604 00:08:03.604 --- 10.0.0.3 ping statistics --- 00:08:03.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.604 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:03.604 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:03.604 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:08:03.604 00:08:03.604 --- 10.0.0.4 ping statistics --- 00:08:03.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.604 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:03.604 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:03.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:03.604 00:08:03.604 --- 10.0.0.1 ping statistics --- 00:08:03.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.604 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:03.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:03.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:08:03.863 00:08:03.863 --- 10.0.0.2 ping statistics --- 00:08:03.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.863 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68715 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68715 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 68715 ']' 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.863 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 [2024-12-05 12:11:34.576486] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
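nvmfappstart, traced above, then launches the target inside that namespace and blocks until its RPC socket answers (the "Waiting for process..." message); the DPDK EAL parameter line for pid 68715 follows next. The pattern, sketched from the logged commands:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the target accepts RPCs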
00:08:03.863 [2024-12-05 12:11:34.576602] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.863 [2024-12-05 12:11:34.729858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.120 [2024-12-05 12:11:34.787527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.120 [2024-12-05 12:11:34.787604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.120 [2024-12-05 12:11:34.787630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.120 [2024-12-05 12:11:34.787640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.120 [2024-12-05 12:11:34.787660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:04.120 [2024-12-05 12:11:34.788157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.120 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.120 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:04.120 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:04.120 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.120 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.121 [2024-12-05 12:11:34.980563] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.121 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.379 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.379 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:04.379 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.379 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.379 [2024-12-05 12:11:34.996733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:08:04.379 12:11:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.379 malloc0 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:04.379 { 00:08:04.379 "params": { 00:08:04.379 "name": "Nvme$subsystem", 00:08:04.379 "trtype": "$TEST_TRANSPORT", 00:08:04.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:04.379 "adrfam": "ipv4", 00:08:04.379 "trsvcid": "$NVMF_PORT", 00:08:04.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:04.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:04.379 "hdgst": ${hdgst:-false}, 00:08:04.379 "ddgst": ${ddgst:-false} 00:08:04.379 }, 00:08:04.379 "method": "bdev_nvme_attach_controller" 00:08:04.379 } 00:08:04.379 EOF 00:08:04.379 )") 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
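The rpc_cmd calls traced above (zcopy.sh lines 22-30) stand up the zero-copy target that bdevperf will exercise, and gen_nvmf_target_json then renders the attach-controller config that is printed next in the log. Condensed as a sketch (rpc_cmd is the test suite's wrapper around scripts/rpc.py, and the /dev/fd/62 seen in the bdevperf invocation is bash process substitution):

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM bdev, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192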
00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:04.379 12:11:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.379 "params": { 00:08:04.379 "name": "Nvme1", 00:08:04.379 "trtype": "tcp", 00:08:04.379 "traddr": "10.0.0.3", 00:08:04.379 "adrfam": "ipv4", 00:08:04.379 "trsvcid": "4420", 00:08:04.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.379 "hdgst": false, 00:08:04.379 "ddgst": false 00:08:04.379 }, 00:08:04.380 "method": "bdev_nvme_attach_controller" 00:08:04.380 }' 00:08:04.380 [2024-12-05 12:11:35.098231] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:08:04.380 [2024-12-05 12:11:35.098357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68752 ] 00:08:04.637 [2024-12-05 12:11:35.251930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.637 [2024-12-05 12:11:35.306008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.637 Running I/O for 10 seconds... 00:08:06.947 6813.00 IOPS, 53.23 MiB/s [2024-12-05T12:11:38.753Z] 6886.50 IOPS, 53.80 MiB/s [2024-12-05T12:11:39.688Z] 6924.33 IOPS, 54.10 MiB/s [2024-12-05T12:11:40.680Z] 6914.75 IOPS, 54.02 MiB/s [2024-12-05T12:11:41.616Z] 6932.00 IOPS, 54.16 MiB/s [2024-12-05T12:11:42.553Z] 6956.00 IOPS, 54.34 MiB/s [2024-12-05T12:11:43.928Z] 6971.14 IOPS, 54.46 MiB/s [2024-12-05T12:11:44.865Z] 6979.12 IOPS, 54.52 MiB/s [2024-12-05T12:11:45.800Z] 6984.00 IOPS, 54.56 MiB/s [2024-12-05T12:11:45.800Z] 6988.80 IOPS, 54.60 MiB/s 00:08:14.931 Latency(us) 00:08:14.931 [2024-12-05T12:11:45.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.931 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:14.931 Verification LBA range: start 0x0 length 0x1000 00:08:14.931 Nvme1n1 : 10.01 6991.46 54.62 0.00 0.00 18249.32 2919.33 30384.87 00:08:14.931 [2024-12-05T12:11:45.800Z] =================================================================================================================== 00:08:14.931 [2024-12-05T12:11:45.801Z] Total : 6991.46 54.62 0.00 0.00 18249.32 2919.33 30384.87 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68875 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.932 { 00:08:14.932 "params": { 00:08:14.932 "name": "Nvme$subsystem", 
00:08:14.932 "trtype": "$TEST_TRANSPORT", 00:08:14.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.932 "adrfam": "ipv4", 00:08:14.932 "trsvcid": "$NVMF_PORT", 00:08:14.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.932 "hdgst": ${hdgst:-false}, 00:08:14.932 "ddgst": ${ddgst:-false} 00:08:14.932 }, 00:08:14.932 "method": "bdev_nvme_attach_controller" 00:08:14.932 } 00:08:14.932 EOF 00:08:14.932 )") 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:14.932 [2024-12-05 12:11:45.707539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.932 [2024-12-05 12:11:45.707608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:14.932 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:14.932 12:11:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.932 "params": { 00:08:14.932 "name": "Nvme1", 00:08:14.932 "trtype": "tcp", 00:08:14.932 "traddr": "10.0.0.3", 00:08:14.932 "adrfam": "ipv4", 00:08:14.932 "trsvcid": "4420", 00:08:14.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.932 "hdgst": false, 00:08:14.932 "ddgst": false 00:08:14.932 }, 00:08:14.932 "method": "bdev_nvme_attach_controller" 00:08:14.932 }' 00:08:14.932 [2024-12-05 12:11:45.719501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.932 [2024-12-05 12:11:45.719531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.932 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:14.932 [2024-12-05 12:11:45.731504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.932 [2024-12-05 12:11:45.731530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.932 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:14.932 [2024-12-05 12:11:45.743509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.932 [2024-12-05 12:11:45.743534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.932 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:08:14.932 [2024-12-05 12:11:45.755518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.932 [2024-12-05 12:11:45.755543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.932 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:14.932 [2024-12-05 12:11:45.763215] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:08:14.932 [2024-12-05 12:11:45.763333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68875 ] 00:08:14.932 [2024-12-05 12:11:45.767492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.932 [2024-12-05 12:11:45.767536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.932 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:14.932 [2024-12-05 12:11:45.779493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.932 [2024-12-05 12:11:45.779520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.932 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:14.932 [2024-12-05 12:11:45.791511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.932 [2024-12-05 12:11:45.791551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.932 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:15.192 [2024-12-05 12:11:45.803514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.192 [2024-12-05 12:11:45.803539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.192 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:15.192 [2024-12-05 12:11:45.815515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.192 [2024-12-05 12:11:45.815556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:15.192 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:15.192 [2024-12-05 12:11:45.827519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.192 [2024-12-05 12:11:45.827560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.192 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:15.192 [2024-12-05 12:11:45.839524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.192 [2024-12-05 12:11:45.839565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.192 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:15.192 [2024-12-05 12:11:45.851556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.192 [2024-12-05 12:11:45.851581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.192 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:15.192 [2024-12-05 12:11:45.863544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.192 [2024-12-05 12:11:45.863586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.192 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:15.192 [2024-12-05 12:11:45.875557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.192 [2024-12-05 12:11:45.875600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.192 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:15.192 [2024-12-05 12:11:45.887555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.192 [2024-12-05 12:11:45.887596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.192 2024/12/05 12:11:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
00:08:15.192 [2024-12-05 12:11:45.910158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:15.193 [2024-12-05 12:11:45.958177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
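(Both notices are consistent with the `-c 0x1` core mask in the EAL parameters above: a one-core mask gives bdevperf a single SPDK reactor, pinned to core 0.)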
00:08:15.453 Running I/O for 5 seconds...
[... add-namespace failures continue at the same cadence, 12:11:46.143 through 12:11:47.128, while bdevperf runs I/O; duplicate entries elided ...]
00:08:16.494 12458.00 IOPS, 97.33 MiB/s [2024-12-05T12:11:47.363Z]
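(For scale: 97.33 MiB/s at 12458 IOPS works out to 97.33 × 1024 / 12458 ≈ 8.0 KiB per I/O, so bdevperf is presumably issuing 8 KiB requests while the add-namespace retries keep failing in the background.)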
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.494 [2024-12-05 12:11:47.360661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.494 [2024-12-05 12:11:47.360835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.370600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.370811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.384861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.385048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.400247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.400472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.416518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.416705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.434258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.434479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.449462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.449495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.466921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.466958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.482471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.482635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.498462] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.498641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.508835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.509022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.522934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.523148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.538851] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.539038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.556051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.556238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.571796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.571984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.588351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.588387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.754 [2024-12-05 12:11:47.605252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.754 [2024-12-05 12:11:47.605332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.754 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:16.755 [2024-12-05 12:11:47.619649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.755 [2024-12-05 12:11:47.619694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.014 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.014 [2024-12-05 12:11:47.629380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.014 [2024-12-05 
12:11:47.629562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.014 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.644043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.644227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.661264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.661480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.676461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.676647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.693441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.693630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.708578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.708781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.723512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.723699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.740788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.740970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.755233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.755268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.772122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.772156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.786277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.786521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.803855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.804040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.817624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.817817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.834595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.834782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.848912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.849100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.866361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.866550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.015 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.015 [2024-12-05 12:11:47.880857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.015 [2024-12-05 12:11:47.881043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.274 [2024-12-05 12:11:47.898019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.274 [2024-12-05 12:11:47.898205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.274 [2024-12-05 12:11:47.913051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.274 [2024-12-05 12:11:47.913237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.274 [2024-12-05 12:11:47.930750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.274 [2024-12-05 12:11:47.930967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.274 [2024-12-05 12:11:47.945138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.274 [2024-12-05 12:11:47.945174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.274 [2024-12-05 12:11:47.961440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.274 [2024-12-05 12:11:47.961473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.274 [2024-12-05 12:11:47.978607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.274 [2024-12-05 12:11:47.978819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.274 [2024-12-05 12:11:47.994482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.274 [2024-12-05 12:11:47.994661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.274 [2024-12-05 12:11:48.010747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.274 [2024-12-05 12:11:48.010969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.274 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.275 [2024-12-05 12:11:48.026757] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.275 [2024-12-05 12:11:48.026972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.275 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.275 [2024-12-05 12:11:48.043273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.275 [2024-12-05 12:11:48.043517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.275 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.275 [2024-12-05 12:11:48.059587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.275 [2024-12-05 12:11:48.059773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.275 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.275 [2024-12-05 12:11:48.071878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.275 [2024-12-05 12:11:48.072064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.275 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.275 [2024-12-05 12:11:48.087511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.275 [2024-12-05 12:11:48.087546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.275 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.275 [2024-12-05 12:11:48.098910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.275 [2024-12-05 12:11:48.098945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.275 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.275 [2024-12-05 12:11:48.116078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.275 [2024-12-05 
12:11:48.116113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.275 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.275 [2024-12-05 12:11:48.132288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.275 [2024-12-05 12:11:48.132320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.275 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 12558.00 IOPS, 98.11 MiB/s [2024-12-05T12:11:48.403Z] [2024-12-05 12:11:48.149116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.149333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.165364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.165554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.182051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.182263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.198208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.198419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.215607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.215795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.231078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.231308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.242376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.242563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.259723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.259911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.275280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.275345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.285177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.285209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.299818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.299853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.317724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.317912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.333597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.333783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.351584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.351770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.367715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.367903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.377197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.377413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.534 [2024-12-05 12:11:48.391935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.534 [2024-12-05 12:11:48.392122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.534 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.401742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.401774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.415937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.415970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.431327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.431360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.442464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.442497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.458383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.458416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.475723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.475764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:08:17.793 [2024-12-05 12:11:48.492003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.492194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.507967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.508148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.518176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.518362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.533161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.533375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.548550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.548754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.565833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.566023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.580412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.580600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.596524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.596728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.614553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.793 [2024-12-05 12:11:48.614590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.793 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.793 [2024-12-05 12:11:48.629821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-12-05 12:11:48.629855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:17.794 [2024-12-05 12:11:48.646270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.794 [2024-12-05 12:11:48.646499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.794 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:18.053 [2024-12-05 12:11:48.662842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.053 [2024-12-05 12:11:48.663021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.053 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:18.053 [2024-12-05 12:11:48.680715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.053 [2024-12-05 12:11:48.680902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:18.053 2024/12/05 12:11:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:08:18.053 [2024-12-05 12:11:48.695033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:18.053 [2024-12-05 12:11:48.695248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical three-line failure (JSON-RPC error Code=-32602 Msg=Invalid parameters, "Requested NSID 1 already in use" at subsystem.c:2126, "Unable to add namespace" at nvmf_rpc.c:1520) repeats over a hundred more times between 12:11:48.709 and 12:11:50.379, differing only in timestamps; two throughput samples from the concurrently running I/O workload are interleaved: ...]
00:08:18.313 12568.00 IOPS, 98.19 MiB/s [2024-12-05T12:11:49.182Z]
00:08:19.350 12522.25 IOPS, 97.83 MiB/s [2024-12-05T12:11:50.219Z]
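For context: the run above exercises the duplicate-namespace error path of SPDK's nvmf_subsystem_add_ns JSON-RPC method. Once NSID 1 is attached to nqn.2016-06.io.spdk:cnode1, every further request for the same NSID is rejected with Code=-32602. Below is a minimal Python sketch of the request/response exchange behind each logged entry, not the test harness's actual code; it assumes the target listens on SPDK's default RPC socket at /var/tmp/spdk.sock and that one send/recv suffices for the small reply (method and parameter names are taken verbatim from the log):

    import json
    import socket

    # JSON-RPC request mirroring the params map printed in the log above.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")  # assumed default SPDK RPC socket path
        sock.sendall(json.dumps(request).encode())
        # With NSID 1 already in use, the target answers with a JSON-RPC error:
        # {"error": {"code": -32602, "message": "Invalid parameters"}}
        print(sock.recv(65536).decode())

Note that the %!s(bool=false) fragments in the logged params are Go fmt verb mismatches (a bool printed with %s) from the test harness's logging; on the wire the fields are ordinary JSON false values.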
Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.298198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.610 [2024-12-05 12:11:50.298252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.610 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.314828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.610 [2024-12-05 12:11:50.314880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.610 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.331430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.610 [2024-12-05 12:11:50.331481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.610 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.348896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.610 [2024-12-05 12:11:50.348950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.610 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.363783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.610 [2024-12-05 12:11:50.363836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.610 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.379502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.610 [2024-12-05 12:11:50.379553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.610 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.391820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:19.610 [2024-12-05 12:11:50.391872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.610 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.401667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.610 [2024-12-05 12:11:50.401715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.610 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.610 [2024-12-05 12:11:50.415571] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.611 [2024-12-05 12:11:50.415623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.611 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.611 [2024-12-05 12:11:50.429485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.611 [2024-12-05 12:11:50.429537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.611 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.611 [2024-12-05 12:11:50.445363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.611 [2024-12-05 12:11:50.445399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.611 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.611 [2024-12-05 12:11:50.461711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.611 [2024-12-05 12:11:50.461765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.611 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.478981] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.479032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.494625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.494677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.504648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.504698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.514763] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.514840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.524729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.524780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.539205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.539269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.555909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.555961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.571411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.571464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.587553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.587606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.604751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.604804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.870 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.870 [2024-12-05 12:11:50.619840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.870 [2024-12-05 12:11:50.619902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.871 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.871 [2024-12-05 12:11:50.635799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.871 [2024-12-05 12:11:50.635851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.871 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.871 [2024-12-05 12:11:50.652631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.871 [2024-12-05 12:11:50.652687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.871 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.871 [2024-12-05 12:11:50.669443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.871 [2024-12-05 12:11:50.669494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.871 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.871 [2024-12-05 12:11:50.684079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.871 [2024-12-05 12:11:50.684130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.871 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.871 [2024-12-05 12:11:50.700321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.871 [2024-12-05 12:11:50.700372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.871 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.871 [2024-12-05 12:11:50.716954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.871 [2024-12-05 12:11:50.717006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.871 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:19.871 [2024-12-05 12:11:50.733690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:19.871 [2024-12-05 12:11:50.733744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:19.871 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.130 [2024-12-05 12:11:50.748935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.130 [2024-12-05 12:11:50.748987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.130 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:08:20.131 [2024-12-05 12:11:50.765950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.766001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.781503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.781540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.792228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.792307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.803024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.803064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.815179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.815244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.825435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.825486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.839200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.839251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.854507] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.854558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.864626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.864677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.877778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.877830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.887877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.887928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.901737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.901787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.911950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.912002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.926006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.926061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.935720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.935772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.948915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.948966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.958702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.958753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.972717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.972770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.982548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.982601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.131 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.131 [2024-12-05 12:11:50.996485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.131 [2024-12-05 12:11:50.996536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.005164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.005215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.019923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.019975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.029612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.029679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.045509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.045560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.063056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.063123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.077769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.077821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.092966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.093019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.102876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.102928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.117389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.392 [2024-12-05 12:11:51.117441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.392 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.392 [2024-12-05 12:11:51.128734] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.128785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 12554.20 IOPS, 98.08 MiB/s [2024-12-05T12:11:51.262Z] [2024-12-05 12:11:51.145970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.146020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 00:08:20.393 Latency(us) 00:08:20.393 [2024-12-05T12:11:51.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.393 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:20.393 Nvme1n1 : 5.01 12557.16 98.10 0.00 0.00 10181.00 4349.21 21090.68 00:08:20.393 [2024-12-05T12:11:51.262Z] =================================================================================================================== 00:08:20.393 [2024-12-05T12:11:51.262Z] Total : 12557.16 98.10 0.00 0.00 10181.00 4349.21 21090.68 00:08:20.393 [2024-12-05 12:11:51.156702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.156751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 [2024-12-05 12:11:51.168719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.168771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 [2024-12-05 12:11:51.180737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.180814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 [2024-12-05 12:11:51.192749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.192814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 [2024-12-05 12:11:51.204725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.204792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 [2024-12-05 
12:11:51.216724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.216785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 [2024-12-05 12:11:51.228742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.228802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 [2024-12-05 12:11:51.240757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.240808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.393 [2024-12-05 12:11:51.252716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.393 [2024-12-05 12:11:51.252778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.393 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.715 [2024-12-05 12:11:51.264758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.715 [2024-12-05 12:11:51.264809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.715 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.715 [2024-12-05 12:11:51.276760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.715 [2024-12-05 12:11:51.276822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.715 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.715 [2024-12-05 12:11:51.288738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.715 
[2024-12-05 12:11:51.288803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.715 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.715 [2024-12-05 12:11:51.300723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.715 [2024-12-05 12:11:51.300769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.715 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.715 [2024-12-05 12:11:51.312729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.715 [2024-12-05 12:11:51.312784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.715 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.715 [2024-12-05 12:11:51.324771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.715 [2024-12-05 12:11:51.324825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.715 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.715 [2024-12-05 12:11:51.336742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.715 [2024-12-05 12:11:51.336780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.715 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.715 [2024-12-05 12:11:51.348746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.716 [2024-12-05 12:11:51.348795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.716 2024/12/05 12:11:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:20.716 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68875) - No such process 00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68875 00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
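The retry storm above comes from a background loop (apparently the pid 68875 that zcopy.sh line 42 tries to kill) re-issuing nvmf_subsystem_add_ns for a namespace ID that is still attached; the target rejects every attempt with JSON-RPC error -32602. A minimal way to reproduce one such rejection by hand, assuming the stock scripts/rpc.py client and the subsystem state shown in this log (the exact loop inside zcopy.sh is not visible here):

    # NSID 1 is already mapped to malloc0 on cnode1, so this call must fail:
    # the target logs "Requested NSID 1 already in use" and returns
    # Code=-32602 Msg=Invalid parameters, exactly as captured above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The %!s(bool=false) fragments in the dumped parameters are Go fmt verb artifacts from the test's JSON-RPC client logging a bool through a string verb; they are not part of the request itself.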
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:20.716 delay0
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:20.716 12:11:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:08:20.979 [2024-12-05 12:11:51.560033] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:27.537 Initializing NVMe Controllers
00:08:27.537 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:08:27.537 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:27.537 Initialization complete. Launching workers.
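Lines @52 to @56 swap the instant malloc0 namespace for a delay bdev so that I/O stays in flight long enough for the abort example to have something to cancel. A hand-run sketch of the traced sequence, with the paths and the 10.0.0.3:4420 listener taken from this log:

    # Detach the fast namespace, put a 1 s (1000000 us) delay bdev in front
    # of malloc0, and re-export it as NSID 1 of the same subsystem.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write latency, us
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 5 s of queue-depth-64 random 50/50 R/W whose commands then get aborted:
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

With every command delayed by a full second, most submissions are still outstanding when the abort is issued, which is why the summary below shows a near-even split of successful and unsuccessful aborts.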
00:08:27.537 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 65
00:08:27.537 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 352, failed to submit 33
00:08:27.537 success 159, unsuccessful 193, failed 0
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:27.537 rmmod nvme_tcp
00:08:27.537 rmmod nvme_fabrics
00:08:27.537 rmmod nvme_keyring
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68715 ']'
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68715
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 68715 ']'
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 68715
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68715
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:27.537 killing process with pid 68715
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68715'
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 68715
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 68715
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
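The iptr helper traced at nvmf/common.sh@791 strips the test's own firewall additions by filtering the saved ruleset and reloading it. A standalone sketch of that idea, assuming the rules to drop are the ones whose saved lines mention SPDK_NVMF (as the grep pattern suggests):

    # Dump all rules, drop every line tagged SPDK_NVMF, reload the rest.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Filtering the textual dump is simpler than deleting rules one by one, at the cost of briefly reloading the whole table.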
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:08:27.537 12:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:08:27.537 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0
00:08:27.538
00:08:27.538 real 0m24.331s  user 0m39.412s  sys 0m6.483s
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:27.538 ************************************
00:08:27.538 END TEST nvmf_zcopy
00:08:27.538 ************************************
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:27.538 ************************************
00:08:27.538 START TEST nvmf_nmic
00:08:27.538 ************************************
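Each test is launched through the run_test helper, which prints the banner pairs above and times the wrapped script (hence the real/user/sys block before END TEST). A simplified stand-in for what it does, reconstructed here rather than copied from common/autotest_common.sh:

    # Sketch: banner-and-time wrapper in the spirit of run_test; the real
    # helper also records per-suite timing data, elided here.
    run_test() {
        local name=$1; shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"
        local rc=$?
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
        return $rc
    }
    run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp

Returning the wrapped script's exit status lets the outer suite fail fast while still emitting the closing banner.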
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:27.538 * Looking for test storage...
00:08:27.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
00:08:27.538 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:27.797 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.798 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:27.798 12:11:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.798 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:27.799 Cannot 
find device "nvmf_init_br" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:27.799 Cannot find device "nvmf_init_br2" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:27.799 Cannot find device "nvmf_tgt_br" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.799 Cannot find device "nvmf_tgt_br2" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:27.799 Cannot find device "nvmf_init_br" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:27.799 Cannot find device "nvmf_init_br2" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:27.799 Cannot find device "nvmf_tgt_br" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:27.799 Cannot find device "nvmf_tgt_br2" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:27.799 Cannot find device "nvmf_br" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:27.799 Cannot find device "nvmf_init_if" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:27.799 Cannot find device "nvmf_init_if2" 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
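The "Cannot find device" and "Cannot open network namespace" messages above are expected at this point: nvmf_veth_fini runs before setup and finds nothing to tear down on a fresh host. The ip commands logged around here then rebuild the test topology from scratch; condensed into a standalone sketch (same interface, namespace, and 10.0.0.0/24 addressing as this run, to be run as root):

#!/usr/bin/env bash
# Sketch of the veth/bridge topology nvmf_veth_init builds for NVMe/TCP tests.

ip netns add nvmf_tgt_ns_spdk

# Four veth pairs: two initiator-side, two target-side, each with a bridge-facing peer.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-facing interfaces live inside the namespace.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator gets 10.0.0.1/.2, target gets 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic in on port 4420 and across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the log verify exactly this wiring: host to both target addresses, and namespace back to both initiator addresses.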
00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:27.799 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:28.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:28.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:28.058 00:08:28.058 --- 10.0.0.3 ping statistics --- 00:08:28.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.058 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:28.058 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:28.058 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:08:28.058 00:08:28.058 --- 10.0.0.4 ping statistics --- 00:08:28.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.058 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:28.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:28.058 00:08:28.058 --- 10.0.0.1 ping statistics --- 00:08:28.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.058 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:28.058 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:28.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:28.059 00:08:28.059 --- 10.0.0.2 ping statistics --- 00:08:28.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.059 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69252 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69252 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69252 ']' 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.059 12:11:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.317 [2024-12-05 12:11:58.956951] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
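nvmfappstart above wraps two steps: launching nvmf_tgt inside the target namespace and blocking until its JSON-RPC socket answers, so that the rpc_cmd calls that follow cannot race the app's startup. A minimal equivalent of that launch-and-wait loop, with the binary path and core mask taken from the command logged above (the spdk_get_version probe is one cheap readiness check, not necessarily the exact one waitforlisten uses):

# Start the target in the namespace, backgrounded; -m 0xF pins it to cores 0-3.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the default RPC socket until the app is ready to serve requests.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done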
00:08:28.317 [2024-12-05 12:11:58.957062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.317 [2024-12-05 12:11:59.104716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.317 [2024-12-05 12:11:59.150941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.317 [2024-12-05 12:11:59.151004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.317 [2024-12-05 12:11:59.151015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.317 [2024-12-05 12:11:59.151023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.317 [2024-12-05 12:11:59.151030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.317 [2024-12-05 12:11:59.152271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.317 [2024-12-05 12:11:59.152433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.317 [2024-12-05 12:11:59.152559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.317 [2024-12-05 12:11:59.152558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.575 [2024-12-05 12:11:59.331591] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.575 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.576 Malloc0 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.576 12:11:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.576 [2024-12-05 12:11:59.397854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:28.576 test case1: single bdev can't be used in multiple subsystems 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.576 [2024-12-05 12:11:59.425667] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:28.576 [2024-12-05 12:11:59.425706] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:28.576 [2024-12-05 12:11:59.425733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.576 2024/12/05 12:11:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:28.576 request: 00:08:28.576 { 00:08:28.576 "method": "nvmf_subsystem_add_ns", 00:08:28.576 "params": { 00:08:28.576 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:28.576 "namespace": { 00:08:28.576 "bdev_name": "Malloc0", 00:08:28.576 "no_auto_visible": false, 00:08:28.576 "hide_metadata": false 00:08:28.576 } 00:08:28.576 } 00:08:28.576 } 00:08:28.576 Got JSON-RPC error response 00:08:28.576 GoRPCClient: error on JSON-RPC call 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:28.576 Adding namespace failed - expected result. 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:28.576 test case2: host connect to nvmf target in multiple paths 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.576 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:28.576 [2024-12-05 12:11:59.441752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:28.835 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.835 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:28.835 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:29.094 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:29.094 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:29.094 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:29.094 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:29.094 12:11:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:30.998 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:30.998 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:30.998 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.998 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
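Test case 2 succeeds by attaching the host to the same subsystem through both listeners (ports 4420 and 4421) and then polling, as waitforserial does above, until the namespace surfaces as a block device. The connect-and-poll pattern in isolation, assuming the kernel's native NVMe multipath (the default, which merges the two paths into a single /dev/nvme0n1) and the hostnqn/hostid generated for this run:

# Host identity produced earlier by 'nvme gen-hostnqn'.
NVME_HOST=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
           --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3)

# Same subsystem over both TCP listeners; two controllers, one namespace device.
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421

# waitforserial: retry until exactly one device reports the test serial.
for (( i = 0; i <= 15; i++ )); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    sleep 2
done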
00:08:30.998 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.998 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:30.998 12:12:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:30.998 [global] 00:08:30.998 thread=1 00:08:30.998 invalidate=1 00:08:30.998 rw=write 00:08:30.998 time_based=1 00:08:30.998 runtime=1 00:08:30.998 ioengine=libaio 00:08:30.998 direct=1 00:08:30.998 bs=4096 00:08:30.998 iodepth=1 00:08:30.998 norandommap=0 00:08:30.998 numjobs=1 00:08:30.998 00:08:30.998 verify_dump=1 00:08:30.998 verify_backlog=512 00:08:30.998 verify_state_save=0 00:08:30.998 do_verify=1 00:08:30.998 verify=crc32c-intel 00:08:30.998 [job0] 00:08:30.998 filename=/dev/nvme0n1 00:08:30.998 Could not set queue depth (nvme0n1) 00:08:31.256 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:31.256 fio-3.35 00:08:31.256 Starting 1 thread 00:08:32.631 00:08:32.631 job0: (groupid=0, jobs=1): err= 0: pid=69348: Thu Dec 5 12:12:03 2024 00:08:32.631 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:32.631 slat (nsec): min=12817, max=69795, avg=16608.45, stdev=4965.50 00:08:32.631 clat (usec): min=118, max=5367, avg=156.90, stdev=137.75 00:08:32.631 lat (usec): min=131, max=5420, avg=173.51, stdev=138.58 00:08:32.631 clat percentiles (usec): 00:08:32.631 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 135], 00:08:32.631 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:08:32.631 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 188], 00:08:32.631 | 99.00th=[ 212], 99.50th=[ 251], 99.90th=[ 783], 99.95th=[ 4080], 00:08:32.631 | 99.99th=[ 5342] 00:08:32.631 write: IOPS=3238, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:08:32.631 slat (usec): min=18, max=111, avg=25.47, stdev= 8.25 00:08:32.631 clat (usec): min=77, max=3966, avg=115.10, stdev=117.11 00:08:32.631 lat (usec): min=103, max=4017, avg=140.57, stdev=118.39 00:08:32.631 clat percentiles (usec): 00:08:32.631 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 97], 00:08:32.631 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 113], 00:08:32.631 | 70.00th=[ 117], 80.00th=[ 124], 90.00th=[ 135], 95.00th=[ 143], 00:08:32.631 | 99.00th=[ 159], 99.50th=[ 172], 99.90th=[ 635], 99.95th=[ 3949], 00:08:32.631 | 99.99th=[ 3982] 00:08:32.631 bw ( KiB/s): min=12288, max=12288, per=94.85%, avg=12288.00, stdev= 0.00, samples=1 00:08:32.631 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:32.631 lat (usec) : 100=13.60%, 250=86.03%, 500=0.22%, 750=0.03%, 1000=0.02% 00:08:32.631 lat (msec) : 4=0.06%, 10=0.03% 00:08:32.631 cpu : usr=2.40%, sys=10.30%, ctx=6314, majf=0, minf=5 00:08:32.631 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:32.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.632 issued rwts: total=3072,3242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.632 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:32.632 00:08:32.632 Run status group 0 (all jobs): 00:08:32.632 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:08:32.632 WRITE: 
bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:08:32.632 00:08:32.632 Disk stats (read/write): 00:08:32.632 nvme0n1: ios=2610/3072, merge=0/0, ticks=432/402, in_queue=834, util=90.48% 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:32.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.632 rmmod nvme_tcp 00:08:32.632 rmmod nvme_fabrics 00:08:32.632 rmmod nvme_keyring 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69252 ']' 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69252 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69252 ']' 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69252 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69252 00:08:32.632 killing process with pid 69252 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.632 
12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69252' 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69252 00:08:32.632 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69252 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.890 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:08:33.149 00:08:33.149 real 0m5.529s 00:08:33.149 user 0m17.026s 00:08:33.149 sys 0m1.529s 00:08:33.149 12:12:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:33.149 ************************************ 00:08:33.149 END TEST nvmf_nmic 00:08:33.149 ************************************ 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.149 ************************************ 00:08:33.149 START TEST nvmf_fio_target 00:08:33.149 ************************************ 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:33.149 * Looking for test storage... 00:08:33.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.149 12:12:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.422 --rc genhtml_branch_coverage=1 00:08:33.422 --rc genhtml_function_coverage=1 00:08:33.422 --rc genhtml_legend=1 00:08:33.422 --rc geninfo_all_blocks=1 00:08:33.422 --rc geninfo_unexecuted_blocks=1 00:08:33.422 00:08:33.422 ' 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.422 --rc genhtml_branch_coverage=1 00:08:33.422 --rc genhtml_function_coverage=1 00:08:33.422 --rc genhtml_legend=1 00:08:33.422 --rc geninfo_all_blocks=1 00:08:33.422 --rc geninfo_unexecuted_blocks=1 00:08:33.422 00:08:33.422 ' 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.422 --rc genhtml_branch_coverage=1 00:08:33.422 --rc genhtml_function_coverage=1 00:08:33.422 --rc genhtml_legend=1 00:08:33.422 --rc geninfo_all_blocks=1 00:08:33.422 --rc geninfo_unexecuted_blocks=1 00:08:33.422 00:08:33.422 ' 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.422 --rc genhtml_branch_coverage=1 00:08:33.422 --rc genhtml_function_coverage=1 00:08:33.422 --rc genhtml_legend=1 00:08:33.422 --rc geninfo_all_blocks=1 00:08:33.422 --rc geninfo_unexecuted_blocks=1 00:08:33.422 00:08:33.422 ' 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:33.422 
12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:33.422 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.423 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.423 12:12:04 
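The benign "[: : integer expression expected" above comes from build_nvmf_app_args testing an empty variable with the integer operator -eq ('[' '' -eq 1 ']'). The surrounding pattern is argv assembly into a bash array, as in this sketch (SOME_TEST_FLAG and --extra-flag are hypothetical placeholders; the real variables live in test/nvmf/common.sh):

```bash
# Array-based argv assembly, as build_nvmf_app_args does above.
NVMF_APP=(nvmf_tgt)
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)   # matches the traced append

# '[' '' -eq 1 ']' errors out because -eq needs integers on both sides;
# defaulting the variable would keep the test quiet:
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--extra-flag)                        # hypothetical flag
fi

printf '%s ' "${NVMF_APP[@]}"; echo
```

The error is cosmetic here: the test simply evaluates false and the run continues.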
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:33.423 Cannot find device "nvmf_init_br" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:33.423 Cannot find device "nvmf_init_br2" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:33.423 Cannot find device "nvmf_tgt_br" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.423 Cannot find device "nvmf_tgt_br2" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:33.423 Cannot find device "nvmf_init_br" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:33.423 Cannot find device "nvmf_init_br2" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:33.423 Cannot find device "nvmf_tgt_br" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:33.423 Cannot find device "nvmf_tgt_br2" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.423 Cannot find device "nvmf_br" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.423 Cannot find device "nvmf_init_if" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.423 Cannot find device "nvmf_init_if2" 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:08:33.423 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:08:33.424 
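The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down any topology left over from a previous run, and every cleanup command is followed by a traced true so a failure cannot abort the script. A condensed sketch of that tolerant teardown:

```bash
# Tolerant teardown, as traced above; on a clean host every command fails
# harmlessly, so each run starts from a known-empty network state.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true   # "Cannot find device ..." is fine
    ip link set "$dev" down     || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if        || true
ip link delete nvmf_init_if2       || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
```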
12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.424 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:08:33.696 00:08:33.696 --- 10.0.0.3 ping statistics --- 00:08:33.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.696 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:33.696 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.697 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.697 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:08:33.697 00:08:33.697 --- 10.0.0.4 ping statistics --- 00:08:33.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.697 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:33.697 00:08:33.697 --- 10.0.0.1 ping statistics --- 00:08:33.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.697 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:08:33.697 00:08:33.697 --- 10.0.0.2 ping statistics --- 00:08:33.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.697 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69578 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69578 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69578 ']' 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.697 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:33.697 [2024-12-05 12:12:04.493937] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
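At this point the virtual fabric is up: two veth pairs per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and pings in both directions confirming connectivity. Condensed to one initiator/target pair (names and addresses as in the log; the second pair is built the same way):

```bash
# One initiator/target veth pair of the topology built and verified above.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers
ip link set nvmf_tgt_br  master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                # host -> namespaced target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 # namespaced target -> host
```

With the data path proven, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers.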
00:08:33.697 [2024-12-05 12:12:04.494023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.956 [2024-12-05 12:12:04.632442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.956 [2024-12-05 12:12:04.676812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.956 [2024-12-05 12:12:04.676885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.956 [2024-12-05 12:12:04.676911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.956 [2024-12-05 12:12:04.676918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.956 [2024-12-05 12:12:04.676924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.956 [2024-12-05 12:12:04.678139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.956 [2024-12-05 12:12:04.678240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.956 [2024-12-05 12:12:04.678355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.956 [2024-12-05 12:12:04.678356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.956 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.956 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:33.956 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.956 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.956 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:34.214 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.214 12:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:34.471 [2024-12-05 12:12:05.124470] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.471 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.728 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:34.728 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:34.986 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:34.986 12:12:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.244 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:35.244 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.812 12:12:06 
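fio.sh now provisions the target over JSON-RPC: a TCP transport, seven 64 MiB malloc bdevs (two exposed directly, two assembled into a raid0, three into a concat array), one subsystem carrying all four block devices as namespaces, and a listener on 10.0.0.3:4420. Condensed from the rpc.py calls traced here and just below:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB IO unit
$rpc bdev_malloc_create 64 512                      # -> Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                      # -> Malloc1
# Malloc2..Malloc6 are created the same way (omitted here), then aggregated:
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# Initiator side: connect, then wait until all four namespaces surface as
# /dev/nvme0n1..n4 (waitforserial greps lsblk for the subsystem serial).
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
```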
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:35.812 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:35.812 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.380 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:36.380 12:12:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.639 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:36.639 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.898 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:36.898 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:37.157 12:12:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:37.416 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:37.416 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.674 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:37.674 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.933 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:38.192 [2024-12-05 12:12:08.839387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:38.192 12:12:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:38.450 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:38.708 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:38.708 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:38.708 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:38.708 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:38.708 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:38.708 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:38.708 12:12:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:41.253 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:41.253 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:41.253 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:41.253 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:41.253 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:41.253 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:41.253 12:12:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:41.253 [global] 00:08:41.253 thread=1 00:08:41.253 invalidate=1 00:08:41.253 rw=write 00:08:41.253 time_based=1 00:08:41.253 runtime=1 00:08:41.253 ioengine=libaio 00:08:41.253 direct=1 00:08:41.253 bs=4096 00:08:41.253 iodepth=1 00:08:41.253 norandommap=0 00:08:41.253 numjobs=1 00:08:41.253 00:08:41.253 verify_dump=1 00:08:41.253 verify_backlog=512 00:08:41.253 verify_state_save=0 00:08:41.253 do_verify=1 00:08:41.253 verify=crc32c-intel 00:08:41.253 [job0] 00:08:41.253 filename=/dev/nvme0n1 00:08:41.253 [job1] 00:08:41.253 filename=/dev/nvme0n2 00:08:41.253 [job2] 00:08:41.253 filename=/dev/nvme0n3 00:08:41.253 [job3] 00:08:41.253 filename=/dev/nvme0n4 00:08:41.253 Could not set queue depth (nvme0n1) 00:08:41.253 Could not set queue depth (nvme0n2) 00:08:41.253 Could not set queue depth (nvme0n3) 00:08:41.253 Could not set queue depth (nvme0n4) 00:08:41.253 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:41.253 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:41.253 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:41.253 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:41.253 fio-3.35 00:08:41.253 Starting 4 threads 00:08:42.186 00:08:42.186 job0: (groupid=0, jobs=1): err= 0: pid=69862: Thu Dec 5 12:12:12 2024 00:08:42.186 read: IOPS=2867, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:08:42.186 slat (nsec): min=12987, max=47829, avg=15492.82, stdev=3757.56 00:08:42.186 clat (usec): min=137, max=214, avg=167.00, stdev=13.68 00:08:42.186 lat (usec): min=150, max=232, avg=182.49, stdev=14.12 00:08:42.186 clat percentiles (usec): 00:08:42.186 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:08:42.186 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:08:42.186 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 192], 00:08:42.186 | 99.00th=[ 206], 99.50th=[ 208], 99.90th=[ 215], 99.95th=[ 215], 00:08:42.186 | 99.99th=[ 215] 00:08:42.186 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:42.186 slat 
(nsec): min=18574, max=70931, avg=22806.08, stdev=5421.82 00:08:42.186 clat (usec): min=99, max=750, avg=129.02, stdev=18.52 00:08:42.187 lat (usec): min=118, max=773, avg=151.82, stdev=19.69 00:08:42.187 clat percentiles (usec): 00:08:42.187 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:08:42.187 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:08:42.187 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:08:42.187 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 190], 99.95th=[ 469], 00:08:42.187 | 99.99th=[ 750] 00:08:42.187 bw ( KiB/s): min=12288, max=12288, per=26.41%, avg=12288.00, stdev= 0.00, samples=1 00:08:42.187 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:42.187 lat (usec) : 100=0.02%, 250=99.95%, 500=0.02%, 1000=0.02% 00:08:42.187 cpu : usr=2.20%, sys=8.70%, ctx=5942, majf=0, minf=9 00:08:42.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.187 issued rwts: total=2870,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.187 job1: (groupid=0, jobs=1): err= 0: pid=69863: Thu Dec 5 12:12:12 2024 00:08:42.187 read: IOPS=2941, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:08:42.187 slat (nsec): min=12496, max=42765, avg=14842.86, stdev=3449.66 00:08:42.187 clat (usec): min=134, max=617, avg=165.03, stdev=15.71 00:08:42.187 lat (usec): min=148, max=633, avg=179.87, stdev=15.94 00:08:42.187 clat percentiles (usec): 00:08:42.187 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:08:42.187 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:08:42.187 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 190], 00:08:42.187 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 221], 99.95th=[ 227], 00:08:42.187 | 99.99th=[ 619] 00:08:42.187 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:42.187 slat (usec): min=18, max=109, avg=22.13, stdev= 5.60 00:08:42.187 clat (usec): min=99, max=213, avg=127.94, stdev=13.44 00:08:42.187 lat (usec): min=118, max=280, avg=150.07, stdev=15.24 00:08:42.187 clat percentiles (usec): 00:08:42.187 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:08:42.187 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:08:42.187 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:08:42.187 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 184], 99.95th=[ 190], 00:08:42.187 | 99.99th=[ 215] 00:08:42.187 bw ( KiB/s): min=12288, max=12288, per=26.41%, avg=12288.00, stdev= 0.00, samples=1 00:08:42.187 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:42.187 lat (usec) : 100=0.02%, 250=99.97%, 750=0.02% 00:08:42.187 cpu : usr=3.10%, sys=7.60%, ctx=6017, majf=0, minf=5 00:08:42.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.187 issued rwts: total=2944,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.187 job2: (groupid=0, jobs=1): err= 0: pid=69864: Thu Dec 5 12:12:12 2024 00:08:42.187 read: IOPS=2492, BW=9970KiB/s 
(10.2MB/s)(9980KiB/1001msec) 00:08:42.187 slat (nsec): min=13607, max=49964, avg=15960.34, stdev=3944.66 00:08:42.187 clat (usec): min=167, max=1649, avg=201.36, stdev=36.67 00:08:42.187 lat (usec): min=182, max=1664, avg=217.32, stdev=36.81 00:08:42.187 clat percentiles (usec): 00:08:42.187 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:08:42.187 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:08:42.187 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 229], 00:08:42.187 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 404], 99.95th=[ 979], 00:08:42.187 | 99.99th=[ 1647] 00:08:42.187 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:42.187 slat (nsec): min=19571, max=96030, avg=23430.35, stdev=5857.22 00:08:42.187 clat (usec): min=125, max=248, avg=152.05, stdev=14.16 00:08:42.187 lat (usec): min=145, max=269, avg=175.48, stdev=15.19 00:08:42.187 clat percentiles (usec): 00:08:42.187 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:08:42.187 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:08:42.187 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 180], 00:08:42.187 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 208], 99.95th=[ 212], 00:08:42.187 | 99.99th=[ 249] 00:08:42.187 bw ( KiB/s): min=12288, max=12288, per=26.41%, avg=12288.00, stdev= 0.00, samples=1 00:08:42.187 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:42.187 lat (usec) : 250=99.86%, 500=0.10%, 1000=0.02% 00:08:42.187 lat (msec) : 2=0.02% 00:08:42.187 cpu : usr=1.50%, sys=8.00%, ctx=5056, majf=0, minf=15 00:08:42.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.187 issued rwts: total=2495,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.187 job3: (groupid=0, jobs=1): err= 0: pid=69865: Thu Dec 5 12:12:12 2024 00:08:42.187 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:08:42.187 slat (nsec): min=12878, max=47847, avg=15589.86, stdev=3724.05 00:08:42.187 clat (usec): min=144, max=469, avg=176.07, stdev=17.60 00:08:42.187 lat (usec): min=157, max=487, avg=191.66, stdev=17.98 00:08:42.187 clat percentiles (usec): 00:08:42.187 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:08:42.187 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:08:42.187 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:08:42.187 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 343], 99.95th=[ 457], 00:08:42.187 | 99.99th=[ 469] 00:08:42.187 write: IOPS=2935, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec); 0 zone resets 00:08:42.187 slat (nsec): min=18580, max=69367, avg=23760.68, stdev=6018.23 00:08:42.187 clat (usec): min=109, max=5813, avg=146.30, stdev=150.62 00:08:42.187 lat (usec): min=129, max=5833, avg=170.06, stdev=150.88 00:08:42.187 clat percentiles (usec): 00:08:42.187 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 124], 20.00th=[ 128], 00:08:42.187 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:08:42.187 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:08:42.187 | 99.00th=[ 190], 99.50th=[ 265], 99.90th=[ 3687], 99.95th=[ 3720], 00:08:42.187 | 99.99th=[ 5800] 00:08:42.187 bw ( KiB/s): min=12288, max=12288, per=26.41%, 
avg=12288.00, stdev= 0.00, samples=1 00:08:42.187 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:42.187 lat (usec) : 250=99.60%, 500=0.22%, 750=0.04%, 1000=0.02% 00:08:42.187 lat (msec) : 2=0.07%, 4=0.04%, 10=0.02% 00:08:42.187 cpu : usr=2.20%, sys=8.20%, ctx=5501, majf=0, minf=9 00:08:42.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:42.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:42.187 issued rwts: total=2560,2938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:42.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:42.187 00:08:42.187 Run status group 0 (all jobs): 00:08:42.187 READ: bw=42.4MiB/s (44.5MB/s), 9970KiB/s-11.5MiB/s (10.2MB/s-12.0MB/s), io=42.5MiB (44.5MB), run=1001-1001msec 00:08:42.187 WRITE: bw=45.4MiB/s (47.6MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=45.5MiB (47.7MB), run=1001-1001msec 00:08:42.187 00:08:42.187 Disk stats (read/write): 00:08:42.187 nvme0n1: ios=2579/2560, merge=0/0, ticks=463/359, in_queue=822, util=87.78% 00:08:42.187 nvme0n2: ios=2600/2603, merge=0/0, ticks=445/360, in_queue=805, util=88.25% 00:08:42.187 nvme0n3: ios=2101/2328, merge=0/0, ticks=509/369, in_queue=878, util=93.38% 00:08:42.187 nvme0n4: ios=2218/2560, merge=0/0, ticks=457/381, in_queue=838, util=92.61% 00:08:42.187 12:12:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:42.187 [global] 00:08:42.187 thread=1 00:08:42.187 invalidate=1 00:08:42.187 rw=randwrite 00:08:42.187 time_based=1 00:08:42.187 runtime=1 00:08:42.187 ioengine=libaio 00:08:42.187 direct=1 00:08:42.187 bs=4096 00:08:42.187 iodepth=1 00:08:42.187 norandommap=0 00:08:42.187 numjobs=1 00:08:42.187 00:08:42.187 verify_dump=1 00:08:42.187 verify_backlog=512 00:08:42.187 verify_state_save=0 00:08:42.187 do_verify=1 00:08:42.187 verify=crc32c-intel 00:08:42.187 [job0] 00:08:42.187 filename=/dev/nvme0n1 00:08:42.187 [job1] 00:08:42.187 filename=/dev/nvme0n2 00:08:42.187 [job2] 00:08:42.187 filename=/dev/nvme0n3 00:08:42.187 [job3] 00:08:42.187 filename=/dev/nvme0n4 00:08:42.187 Could not set queue depth (nvme0n1) 00:08:42.187 Could not set queue depth (nvme0n2) 00:08:42.187 Could not set queue depth (nvme0n3) 00:08:42.187 Could not set queue depth (nvme0n4) 00:08:42.445 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.445 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.445 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.445 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.445 fio-3.35 00:08:42.445 Starting 4 threads 00:08:43.820 00:08:43.820 job0: (groupid=0, jobs=1): err= 0: pid=69924: Thu Dec 5 12:12:14 2024 00:08:43.820 read: IOPS=1646, BW=6585KiB/s (6743kB/s)(6592KiB/1001msec) 00:08:43.820 slat (nsec): min=10283, max=60828, avg=11861.31, stdev=3812.79 00:08:43.820 clat (usec): min=240, max=397, avg=286.28, stdev=20.83 00:08:43.820 lat (usec): min=251, max=413, avg=298.14, stdev=21.20 00:08:43.820 clat percentiles (usec): 00:08:43.820 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:08:43.820 | 30.00th=[ 273], 40.00th=[ 281], 
50.00th=[ 285], 60.00th=[ 293], 00:08:43.820 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:08:43.820 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 388], 99.95th=[ 400], 00:08:43.820 | 99.99th=[ 400] 00:08:43.820 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:43.820 slat (nsec): min=13125, max=92557, avg=20709.72, stdev=6150.22 00:08:43.820 clat (usec): min=106, max=1571, avg=225.02, stdev=36.93 00:08:43.820 lat (usec): min=137, max=1590, avg=245.72, stdev=37.00 00:08:43.820 clat percentiles (usec): 00:08:43.820 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:08:43.820 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:08:43.820 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 265], 00:08:43.820 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 318], 99.95th=[ 338], 00:08:43.820 | 99.99th=[ 1565] 00:08:43.820 bw ( KiB/s): min= 8175, max= 8175, per=20.26%, avg=8175.00, stdev= 0.00, samples=1 00:08:43.820 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:08:43.820 lat (usec) : 250=48.57%, 500=51.41% 00:08:43.820 lat (msec) : 2=0.03% 00:08:43.820 cpu : usr=1.20%, sys=5.10%, ctx=3697, majf=0, minf=13 00:08:43.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.820 issued rwts: total=1648,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.820 job1: (groupid=0, jobs=1): err= 0: pid=69925: Thu Dec 5 12:12:14 2024 00:08:43.820 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:08:43.820 slat (usec): min=12, max=274, avg=15.48, stdev= 7.74 00:08:43.820 clat (usec): min=29, max=826, avg=191.83, stdev=28.27 00:08:43.820 lat (usec): min=171, max=847, avg=207.31, stdev=28.69 00:08:43.820 clat percentiles (usec): 00:08:43.820 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:08:43.820 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:08:43.820 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 225], 00:08:43.820 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 652], 99.95th=[ 693], 00:08:43.820 | 99.99th=[ 824] 00:08:43.820 write: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec); 0 zone resets 00:08:43.820 slat (nsec): min=17608, max=78227, avg=22290.29, stdev=6254.92 00:08:43.820 clat (usec): min=111, max=207, avg=134.82, stdev=16.15 00:08:43.820 lat (usec): min=129, max=248, avg=157.11, stdev=17.89 00:08:43.820 clat percentiles (usec): 00:08:43.820 | 1.00th=[ 116], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 122], 00:08:43.820 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 135], 00:08:43.820 | 70.00th=[ 141], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 169], 00:08:43.820 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 198], 99.95th=[ 206], 00:08:43.820 | 99.99th=[ 208] 00:08:43.820 bw ( KiB/s): min=12263, max=12263, per=30.40%, avg=12263.00, stdev= 0.00, samples=1 00:08:43.820 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:08:43.820 lat (usec) : 50=0.02%, 250=99.67%, 500=0.24%, 750=0.05%, 1000=0.02% 00:08:43.820 cpu : usr=1.30%, sys=8.80%, ctx=5507, majf=0, minf=5 00:08:43.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
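Both fio passes are generated the same way: the wrapper flags on the command line map directly onto the job file echoed above (-i 4096 to bs, -d 1 to iodepth, -t randwrite to rw, -r 1 to runtime), with crc32c verification enabled, so each pass is a short data-integrity check over all four namespaces rather than a benchmark. A hand-rolled equivalent of the second invocation, inferred from that job file (the actual generator is scripts/fio-wrapper and may differ in detail):

```bash
# Reconstruction of "fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v",
# based on the [global]/[jobN] sections traced above.
cat > /tmp/nvmf.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio /tmp/nvmf.fio
```

The per-job output that follows (slat/clat percentiles, bw, iops, IO depths) is standard fio-3.35 reporting at queue depth 1.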
00:08:43.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.820 issued rwts: total=2560,2928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.820 job2: (groupid=0, jobs=1): err= 0: pid=69926: Thu Dec 5 12:12:14 2024 00:08:43.820 read: IOPS=2627, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:08:43.820 slat (nsec): min=12464, max=64977, avg=15578.44, stdev=4596.28 00:08:43.820 clat (usec): min=142, max=1520, avg=178.35, stdev=41.34 00:08:43.820 lat (usec): min=155, max=1548, avg=193.93, stdev=42.18 00:08:43.820 clat percentiles (usec): 00:08:43.820 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:08:43.820 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:08:43.820 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 210], 00:08:43.820 | 99.00th=[ 285], 99.50th=[ 338], 99.90th=[ 701], 99.95th=[ 889], 00:08:43.820 | 99.99th=[ 1516] 00:08:43.820 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:43.820 slat (usec): min=17, max=103, avg=22.63, stdev= 6.67 00:08:43.820 clat (usec): min=104, max=317, avg=133.88, stdev=15.87 00:08:43.820 lat (usec): min=123, max=343, avg=156.50, stdev=18.02 00:08:43.820 clat percentiles (usec): 00:08:43.820 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 119], 20.00th=[ 122], 00:08:43.820 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:08:43.820 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 165], 00:08:43.820 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 200], 99.95th=[ 302], 00:08:43.820 | 99.99th=[ 318] 00:08:43.820 bw ( KiB/s): min=12263, max=12263, per=30.40%, avg=12263.00, stdev= 0.00, samples=1 00:08:43.820 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:08:43.820 lat (usec) : 250=99.26%, 500=0.65%, 750=0.05%, 1000=0.02% 00:08:43.820 lat (msec) : 2=0.02% 00:08:43.820 cpu : usr=2.20%, sys=8.20%, ctx=5702, majf=0, minf=11 00:08:43.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.820 issued rwts: total=2630,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.820 job3: (groupid=0, jobs=1): err= 0: pid=69927: Thu Dec 5 12:12:14 2024 00:08:43.820 read: IOPS=1647, BW=6589KiB/s (6748kB/s)(6596KiB/1001msec) 00:08:43.820 slat (nsec): min=12192, max=61972, avg=14265.80, stdev=4308.02 00:08:43.820 clat (usec): min=132, max=346, avg=283.58, stdev=20.39 00:08:43.820 lat (usec): min=152, max=364, avg=297.84, stdev=20.78 00:08:43.820 clat percentiles (usec): 00:08:43.820 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:08:43.820 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:08:43.820 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:08:43.820 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 347], 00:08:43.820 | 99.99th=[ 347] 00:08:43.820 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:43.820 slat (usec): min=16, max=111, avg=21.03, stdev= 6.82 00:08:43.820 clat (usec): min=110, max=1493, avg=224.64, stdev=34.89 00:08:43.820 lat (usec): min=157, max=1513, avg=245.68, stdev=35.09 00:08:43.820 clat percentiles (usec): 00:08:43.820 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 200], 
20.00th=[ 206], 00:08:43.820 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:08:43.820 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 262], 00:08:43.820 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 314], 00:08:43.820 | 99.99th=[ 1500] 00:08:43.820 bw ( KiB/s): min= 8192, max= 8192, per=20.31%, avg=8192.00, stdev= 0.00, samples=1 00:08:43.820 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:43.820 lat (usec) : 250=49.18%, 500=50.80% 00:08:43.820 lat (msec) : 2=0.03% 00:08:43.820 cpu : usr=1.70%, sys=4.70%, ctx=3698, majf=0, minf=15 00:08:43.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.820 issued rwts: total=1649,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.820 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.820 00:08:43.820 Run status group 0 (all jobs): 00:08:43.820 READ: bw=33.1MiB/s (34.7MB/s), 6585KiB/s-10.3MiB/s (6743kB/s-10.8MB/s), io=33.2MiB (34.8MB), run=1001-1001msec 00:08:43.820 WRITE: bw=39.4MiB/s (41.3MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.4MiB (41.4MB), run=1001-1001msec 00:08:43.820 00:08:43.820 Disk stats (read/write): 00:08:43.821 nvme0n1: ios=1586/1667, merge=0/0, ticks=439/391, in_queue=830, util=88.78% 00:08:43.821 nvme0n2: ios=2263/2560, merge=0/0, ticks=468/376, in_queue=844, util=89.81% 00:08:43.821 nvme0n3: ios=2378/2560, merge=0/0, ticks=433/363, in_queue=796, util=89.36% 00:08:43.821 nvme0n4: ios=1536/1667, merge=0/0, ticks=440/369, in_queue=809, util=89.92% 00:08:43.821 12:12:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:43.821 [global] 00:08:43.821 thread=1 00:08:43.821 invalidate=1 00:08:43.821 rw=write 00:08:43.821 time_based=1 00:08:43.821 runtime=1 00:08:43.821 ioengine=libaio 00:08:43.821 direct=1 00:08:43.821 bs=4096 00:08:43.821 iodepth=128 00:08:43.821 norandommap=0 00:08:43.821 numjobs=1 00:08:43.821 00:08:43.821 verify_dump=1 00:08:43.821 verify_backlog=512 00:08:43.821 verify_state_save=0 00:08:43.821 do_verify=1 00:08:43.821 verify=crc32c-intel 00:08:43.821 [job0] 00:08:43.821 filename=/dev/nvme0n1 00:08:43.821 [job1] 00:08:43.821 filename=/dev/nvme0n2 00:08:43.821 [job2] 00:08:43.821 filename=/dev/nvme0n3 00:08:43.821 [job3] 00:08:43.821 filename=/dev/nvme0n4 00:08:43.821 Could not set queue depth (nvme0n1) 00:08:43.821 Could not set queue depth (nvme0n2) 00:08:43.821 Could not set queue depth (nvme0n3) 00:08:43.821 Could not set queue depth (nvme0n4) 00:08:43.821 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:43.821 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:43.821 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:43.821 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:43.821 fio-3.35 00:08:43.821 Starting 4 threads 00:08:44.782 00:08:44.782 job0: (groupid=0, jobs=1): err= 0: pid=69987: Thu Dec 5 12:12:15 2024 00:08:44.782 read: IOPS=5264, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1005msec) 00:08:44.782 slat (usec): min=5, max=12945, avg=94.38, stdev=520.98 00:08:44.782 
clat (usec): min=1958, max=37624, avg=12048.77, stdev=2824.48 00:08:44.782 lat (usec): min=8438, max=37640, avg=12143.15, stdev=2855.25 00:08:44.782 clat percentiles (usec): 00:08:44.782 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11207], 00:08:44.782 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:08:44.782 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12911], 95.00th=[14877], 00:08:44.782 | 99.00th=[27919], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:08:44.782 | 99.99th=[37487] 00:08:44.782 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:08:44.782 slat (usec): min=10, max=6283, avg=81.95, stdev=355.87 00:08:44.782 clat (usec): min=7756, max=24845, avg=11248.73, stdev=1600.10 00:08:44.782 lat (usec): min=7771, max=24861, avg=11330.69, stdev=1590.62 00:08:44.782 clat percentiles (usec): 00:08:44.782 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10552], 00:08:44.782 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:08:44.783 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12387], 95.00th=[13435], 00:08:44.783 | 99.00th=[14615], 99.50th=[22152], 99.90th=[24773], 99.95th=[24773], 00:08:44.783 | 99.99th=[24773] 00:08:44.783 bw ( KiB/s): min=21488, max=23568, per=35.03%, avg=22528.00, stdev=1470.78, samples=2 00:08:44.783 iops : min= 5372, max= 5892, avg=5632.00, stdev=367.70, samples=2 00:08:44.783 lat (msec) : 2=0.01%, 10=9.16%, 20=89.49%, 50=1.34% 00:08:44.783 cpu : usr=5.08%, sys=13.94%, ctx=513, majf=0, minf=9 00:08:44.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:44.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:44.783 issued rwts: total=5291,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:44.783 job1: (groupid=0, jobs=1): err= 0: pid=69988: Thu Dec 5 12:12:15 2024 00:08:44.783 read: IOPS=2563, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1004msec) 00:08:44.783 slat (usec): min=6, max=9305, avg=179.26, stdev=831.86 00:08:44.783 clat (usec): min=1872, max=37478, avg=22045.23, stdev=3481.37 00:08:44.783 lat (usec): min=4182, max=37500, avg=22224.49, stdev=3547.83 00:08:44.783 clat percentiles (usec): 00:08:44.783 | 1.00th=[15401], 5.00th=[17695], 10.00th=[18482], 20.00th=[19006], 00:08:44.783 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21627], 60.00th=[21890], 00:08:44.783 | 70.00th=[23987], 80.00th=[25035], 90.00th=[26870], 95.00th=[27395], 00:08:44.783 | 99.00th=[32375], 99.50th=[33424], 99.90th=[33817], 99.95th=[34341], 00:08:44.783 | 99.99th=[37487] 00:08:44.783 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:08:44.783 slat (usec): min=14, max=5806, avg=167.14, stdev=676.88 00:08:44.783 clat (usec): min=7034, max=42018, avg=22777.86, stdev=6872.52 00:08:44.783 lat (usec): min=7057, max=42043, avg=22944.99, stdev=6922.63 00:08:44.783 clat percentiles (usec): 00:08:44.783 | 1.00th=[12256], 5.00th=[13698], 10.00th=[14484], 20.00th=[16319], 00:08:44.783 | 30.00th=[17433], 40.00th=[19530], 50.00th=[21103], 60.00th=[24773], 00:08:44.783 | 70.00th=[27132], 80.00th=[28181], 90.00th=[32900], 95.00th=[35914], 00:08:44.783 | 99.00th=[38536], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:08:44.783 | 99.99th=[42206] 00:08:44.783 bw ( KiB/s): min=11376, max=12312, per=18.41%, avg=11844.00, stdev=661.85, samples=2 00:08:44.783 iops : min= 
2844, max= 3078, avg=2961.00, stdev=165.46, samples=2 00:08:44.783 lat (msec) : 2=0.02%, 10=0.43%, 20=36.33%, 50=63.23% 00:08:44.783 cpu : usr=3.19%, sys=9.47%, ctx=348, majf=0, minf=7 00:08:44.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:08:44.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:44.783 issued rwts: total=2574,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:44.783 job2: (groupid=0, jobs=1): err= 0: pid=69990: Thu Dec 5 12:12:15 2024 00:08:44.783 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:08:44.783 slat (usec): min=7, max=3288, avg=102.99, stdev=480.26 00:08:44.783 clat (usec): min=10418, max=16112, avg=13710.62, stdev=832.62 00:08:44.783 lat (usec): min=10743, max=16127, avg=13813.62, stdev=703.90 00:08:44.783 clat percentiles (usec): 00:08:44.783 | 1.00th=[10945], 5.00th=[11469], 10.00th=[12649], 20.00th=[13566], 00:08:44.783 | 30.00th=[13698], 40.00th=[13829], 50.00th=[13829], 60.00th=[13960], 00:08:44.783 | 70.00th=[14091], 80.00th=[14222], 90.00th=[14484], 95.00th=[14615], 00:08:44.783 | 99.00th=[15008], 99.50th=[15270], 99.90th=[16057], 99.95th=[16057], 00:08:44.783 | 99.99th=[16057] 00:08:44.783 write: IOPS=4881, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec); 0 zone resets 00:08:44.783 slat (usec): min=12, max=3206, avg=99.18, stdev=418.73 00:08:44.783 clat (usec): min=2208, max=16234, avg=12950.43, stdev=1767.00 00:08:44.783 lat (usec): min=2226, max=16253, avg=13049.61, stdev=1766.22 00:08:44.783 clat percentiles (usec): 00:08:44.783 | 1.00th=[ 5735], 5.00th=[11076], 10.00th=[11338], 20.00th=[11600], 00:08:44.783 | 30.00th=[11863], 40.00th=[12125], 50.00th=[13435], 60.00th=[13829], 00:08:44.783 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14746], 95.00th=[15008], 00:08:44.783 | 99.00th=[15401], 99.50th=[15533], 99.90th=[16188], 99.95th=[16188], 00:08:44.783 | 99.99th=[16188] 00:08:44.783 bw ( KiB/s): min=17920, max=20272, per=29.69%, avg=19096.00, stdev=1663.12, samples=2 00:08:44.783 iops : min= 4480, max= 5068, avg=4774.00, stdev=415.78, samples=2 00:08:44.783 lat (msec) : 4=0.37%, 10=1.50%, 20=98.13% 00:08:44.783 cpu : usr=4.89%, sys=13.67%, ctx=510, majf=0, minf=17 00:08:44.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:44.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:44.783 issued rwts: total=4608,4896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:44.783 job3: (groupid=0, jobs=1): err= 0: pid=69991: Thu Dec 5 12:12:15 2024 00:08:44.783 read: IOPS=2399, BW=9598KiB/s (9828kB/s)(9636KiB/1004msec) 00:08:44.783 slat (usec): min=5, max=12731, avg=210.40, stdev=1060.96 00:08:44.783 clat (usec): min=3724, max=41318, avg=27018.40, stdev=6024.98 00:08:44.783 lat (usec): min=3740, max=45509, avg=27228.79, stdev=5983.68 00:08:44.783 clat percentiles (usec): 00:08:44.783 | 1.00th=[10945], 5.00th=[18482], 10.00th=[21365], 20.00th=[22414], 00:08:44.783 | 30.00th=[23725], 40.00th=[24511], 50.00th=[27657], 60.00th=[28181], 00:08:44.783 | 70.00th=[28967], 80.00th=[30802], 90.00th=[35390], 95.00th=[38536], 00:08:44.783 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:44.783 | 99.99th=[41157] 
00:08:44.783 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:08:44.783 slat (usec): min=16, max=5985, avg=183.01, stdev=773.40 00:08:44.783 clat (usec): min=13728, max=38293, avg=23850.29, stdev=5024.48 00:08:44.783 lat (usec): min=15691, max=41288, avg=24033.30, stdev=5014.52 00:08:44.783 clat percentiles (usec): 00:08:44.783 | 1.00th=[15664], 5.00th=[17433], 10.00th=[17957], 20.00th=[18220], 00:08:44.783 | 30.00th=[20579], 40.00th=[22938], 50.00th=[23725], 60.00th=[24773], 00:08:44.783 | 70.00th=[26084], 80.00th=[26870], 90.00th=[31589], 95.00th=[33424], 00:08:44.783 | 99.00th=[35914], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:08:44.783 | 99.99th=[38536] 00:08:44.783 bw ( KiB/s): min= 9072, max=11430, per=15.94%, avg=10251.00, stdev=1667.36, samples=2 00:08:44.783 iops : min= 2268, max= 2857, avg=2562.50, stdev=416.49, samples=2 00:08:44.783 lat (msec) : 4=0.28%, 10=0.06%, 20=16.78%, 50=82.87% 00:08:44.783 cpu : usr=3.69%, sys=7.68%, ctx=236, majf=0, minf=19 00:08:44.783 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:08:44.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:44.783 issued rwts: total=2409,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.783 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:44.783 00:08:44.783 Run status group 0 (all jobs): 00:08:44.783 READ: bw=57.8MiB/s (60.7MB/s), 9598KiB/s-20.6MiB/s (9828kB/s-21.6MB/s), io=58.1MiB (61.0MB), run=1003-1005msec 00:08:44.783 WRITE: bw=62.8MiB/s (65.9MB/s), 9.96MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=63.1MiB (66.2MB), run=1003-1005msec 00:08:44.783 00:08:44.783 Disk stats (read/write): 00:08:44.783 nvme0n1: ios=4658/5082, merge=0/0, ticks=15868/16098, in_queue=31966, util=87.58% 00:08:44.783 nvme0n2: ios=2341/2560, merge=0/0, ticks=16874/16987, in_queue=33861, util=89.04% 00:08:44.783 nvme0n3: ios=3977/4096, merge=0/0, ticks=12517/11735, in_queue=24252, util=89.05% 00:08:44.783 nvme0n4: ios=2048/2263, merge=0/0, ticks=13731/11696, in_queue=25427, util=89.81% 00:08:44.783 12:12:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:44.783 [global] 00:08:44.783 thread=1 00:08:44.783 invalidate=1 00:08:44.783 rw=randwrite 00:08:44.783 time_based=1 00:08:44.783 runtime=1 00:08:44.783 ioengine=libaio 00:08:44.783 direct=1 00:08:44.783 bs=4096 00:08:44.783 iodepth=128 00:08:44.783 norandommap=0 00:08:44.783 numjobs=1 00:08:44.783 00:08:44.783 verify_dump=1 00:08:44.783 verify_backlog=512 00:08:44.783 verify_state_save=0 00:08:44.783 do_verify=1 00:08:44.783 verify=crc32c-intel 00:08:44.783 [job0] 00:08:44.783 filename=/dev/nvme0n1 00:08:44.783 [job1] 00:08:44.783 filename=/dev/nvme0n2 00:08:44.783 [job2] 00:08:44.783 filename=/dev/nvme0n3 00:08:44.783 [job3] 00:08:44.783 filename=/dev/nvme0n4 00:08:45.042 Could not set queue depth (nvme0n1) 00:08:45.042 Could not set queue depth (nvme0n2) 00:08:45.042 Could not set queue depth (nvme0n3) 00:08:45.042 Could not set queue depth (nvme0n4) 00:08:45.042 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.042 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.042 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:08:45.042 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.042 fio-3.35 00:08:45.042 Starting 4 threads 00:08:46.421 00:08:46.421 job0: (groupid=0, jobs=1): err= 0: pid=70044: Thu Dec 5 12:12:17 2024 00:08:46.421 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:08:46.421 slat (usec): min=3, max=9893, avg=193.40, stdev=927.75 00:08:46.421 clat (usec): min=15069, max=36126, avg=24113.15, stdev=3111.82 00:08:46.421 lat (usec): min=15091, max=36162, avg=24306.55, stdev=3197.89 00:08:46.421 clat percentiles (usec): 00:08:46.421 | 1.00th=[16712], 5.00th=[19006], 10.00th=[21365], 20.00th=[22152], 00:08:46.421 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23725], 60.00th=[24249], 00:08:46.421 | 70.00th=[24773], 80.00th=[26346], 90.00th=[28705], 95.00th=[29754], 00:08:46.421 | 99.00th=[32900], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:08:46.421 | 99.99th=[35914] 00:08:46.421 write: IOPS=2838, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1010msec); 0 zone resets 00:08:46.421 slat (usec): min=5, max=10056, avg=168.71, stdev=832.16 00:08:46.421 clat (usec): min=9433, max=35082, avg=23030.71, stdev=3579.39 00:08:46.421 lat (usec): min=10162, max=35088, avg=23199.42, stdev=3654.45 00:08:46.421 clat percentiles (usec): 00:08:46.421 | 1.00th=[11338], 5.00th=[16909], 10.00th=[19006], 20.00th=[20579], 00:08:46.421 | 30.00th=[21627], 40.00th=[22414], 50.00th=[23200], 60.00th=[24511], 00:08:46.421 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26346], 95.00th=[27395], 00:08:46.421 | 99.00th=[32113], 99.50th=[33424], 99.90th=[34866], 99.95th=[34866], 00:08:46.421 | 99.99th=[34866] 00:08:46.421 bw ( KiB/s): min= 9632, max=12312, per=16.89%, avg=10972.00, stdev=1895.05, samples=2 00:08:46.421 iops : min= 2408, max= 3078, avg=2743.00, stdev=473.76, samples=2 00:08:46.421 lat (msec) : 10=0.02%, 20=12.38%, 50=87.60% 00:08:46.421 cpu : usr=2.68%, sys=7.43%, ctx=763, majf=0, minf=3 00:08:46.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:46.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.421 issued rwts: total=2560,2867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.421 job1: (groupid=0, jobs=1): err= 0: pid=70045: Thu Dec 5 12:12:17 2024 00:08:46.421 read: IOPS=5301, BW=20.7MiB/s (21.7MB/s)(20.9MiB/1008msec) 00:08:46.421 slat (usec): min=4, max=10360, avg=97.73, stdev=607.22 00:08:46.421 clat (usec): min=4628, max=22532, avg=12377.95, stdev=3308.19 00:08:46.421 lat (usec): min=4643, max=22548, avg=12475.68, stdev=3336.43 00:08:46.421 clat percentiles (usec): 00:08:46.421 | 1.00th=[ 5211], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9503], 00:08:46.421 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:08:46.421 | 70.00th=[13173], 80.00th=[14877], 90.00th=[17433], 95.00th=[19530], 00:08:46.421 | 99.00th=[21103], 99.50th=[21627], 99.90th=[21890], 99.95th=[22414], 00:08:46.421 | 99.99th=[22414] 00:08:46.421 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:08:46.421 slat (usec): min=5, max=8061, avg=77.33, stdev=277.79 00:08:46.421 clat (usec): min=3685, max=22396, avg=10919.60, stdev=2354.55 00:08:46.421 lat (usec): min=3714, max=22404, avg=10996.93, stdev=2372.46 00:08:46.421 clat percentiles (usec): 00:08:46.421 | 1.00th=[ 4621], 
5.00th=[ 5669], 10.00th=[ 6652], 20.00th=[ 9372], 00:08:46.421 | 30.00th=[10814], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:08:46.421 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12780], 00:08:46.421 | 99.00th=[13042], 99.50th=[16057], 99.90th=[21627], 99.95th=[21890], 00:08:46.421 | 99.99th=[22414] 00:08:46.421 bw ( KiB/s): min=21712, max=23344, per=34.67%, avg=22528.00, stdev=1154.00, samples=2 00:08:46.421 iops : min= 5428, max= 5836, avg=5632.00, stdev=288.50, samples=2 00:08:46.421 lat (msec) : 4=0.18%, 10=23.88%, 20=73.88%, 50=2.06% 00:08:46.421 cpu : usr=4.87%, sys=13.01%, ctx=830, majf=0, minf=9 00:08:46.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:46.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.421 issued rwts: total=5344,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.421 job2: (groupid=0, jobs=1): err= 0: pid=70046: Thu Dec 5 12:12:17 2024 00:08:46.421 read: IOPS=4729, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1002msec) 00:08:46.421 slat (usec): min=4, max=12018, avg=110.46, stdev=711.85 00:08:46.421 clat (usec): min=1688, max=26175, avg=13858.44, stdev=3590.86 00:08:46.421 lat (usec): min=1734, max=26181, avg=13968.90, stdev=3617.70 00:08:46.421 clat percentiles (usec): 00:08:46.421 | 1.00th=[ 5538], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[11338], 00:08:46.421 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12780], 60.00th=[13304], 00:08:46.421 | 70.00th=[14746], 80.00th=[16319], 90.00th=[19268], 95.00th=[21890], 00:08:46.421 | 99.00th=[23987], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:08:46.421 | 99.99th=[26084] 00:08:46.421 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:08:46.421 slat (usec): min=5, max=9587, avg=85.56, stdev=400.54 00:08:46.421 clat (usec): min=3581, max=24932, avg=11974.94, stdev=2516.70 00:08:46.421 lat (usec): min=3680, max=24942, avg=12060.50, stdev=2549.25 00:08:46.421 clat percentiles (usec): 00:08:46.421 | 1.00th=[ 4752], 5.00th=[ 6194], 10.00th=[ 7570], 20.00th=[10552], 00:08:46.421 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13042], 60.00th=[13304], 00:08:46.421 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13698], 95.00th=[13829], 00:08:46.422 | 99.00th=[14615], 99.50th=[14877], 99.90th=[23987], 99.95th=[24249], 00:08:46.422 | 99.99th=[25035] 00:08:46.422 bw ( KiB/s): min=20480, max=20521, per=31.55%, avg=20500.50, stdev=28.99, samples=2 00:08:46.422 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:08:46.422 lat (msec) : 2=0.06%, 4=0.07%, 10=13.52%, 20=82.08%, 50=4.27% 00:08:46.422 cpu : usr=5.00%, sys=11.79%, ctx=720, majf=0, minf=8 00:08:46.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:46.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.422 issued rwts: total=4739,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.422 job3: (groupid=0, jobs=1): err= 0: pid=70047: Thu Dec 5 12:12:17 2024 00:08:46.422 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:08:46.422 slat (usec): min=3, max=10880, avg=190.18, stdev=909.77 00:08:46.422 clat (usec): min=16344, max=36156, avg=23901.76, stdev=2806.35 
00:08:46.422 lat (usec): min=16357, max=40230, avg=24091.94, stdev=2900.14 00:08:46.422 clat percentiles (usec): 00:08:46.422 | 1.00th=[17957], 5.00th=[19792], 10.00th=[20841], 20.00th=[21890], 00:08:46.422 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23725], 60.00th=[23987], 00:08:46.422 | 70.00th=[24249], 80.00th=[25297], 90.00th=[28443], 95.00th=[29492], 00:08:46.422 | 99.00th=[31065], 99.50th=[33162], 99.90th=[35914], 99.95th=[35914], 00:08:46.422 | 99.99th=[35914] 00:08:46.422 write: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1007msec); 0 zone resets 00:08:46.422 slat (usec): min=5, max=12631, avg=176.67, stdev=861.17 00:08:46.422 clat (usec): min=5643, max=33960, avg=23478.77, stdev=3338.21 00:08:46.422 lat (usec): min=6330, max=34144, avg=23655.45, stdev=3404.74 00:08:46.422 clat percentiles (usec): 00:08:46.422 | 1.00th=[14353], 5.00th=[17957], 10.00th=[19792], 20.00th=[21103], 00:08:46.422 | 30.00th=[21890], 40.00th=[22938], 50.00th=[23725], 60.00th=[24773], 00:08:46.422 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26608], 95.00th=[28443], 00:08:46.422 | 99.00th=[32637], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:08:46.422 | 99.99th=[33817] 00:08:46.422 bw ( KiB/s): min= 9128, max=12160, per=16.38%, avg=10644.00, stdev=2143.95, samples=2 00:08:46.422 iops : min= 2282, max= 3040, avg=2661.00, stdev=535.99, samples=2 00:08:46.422 lat (msec) : 10=0.17%, 20=8.13%, 50=91.70% 00:08:46.422 cpu : usr=2.58%, sys=7.26%, ctx=730, majf=0, minf=19 00:08:46.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:08:46.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.422 issued rwts: total=2560,2788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.422 00:08:46.422 Run status group 0 (all jobs): 00:08:46.422 READ: bw=58.8MiB/s (61.7MB/s), 9.90MiB/s-20.7MiB/s (10.4MB/s-21.7MB/s), io=59.4MiB (62.3MB), run=1002-1010msec 00:08:46.422 WRITE: bw=63.5MiB/s (66.5MB/s), 10.8MiB/s-21.8MiB/s (11.3MB/s-22.9MB/s), io=64.1MiB (67.2MB), run=1002-1010msec 00:08:46.422 00:08:46.422 Disk stats (read/write): 00:08:46.422 nvme0n1: ios=2135/2560, merge=0/0, ticks=24095/26818, in_queue=50913, util=87.58% 00:08:46.422 nvme0n2: ios=4656/4807, merge=0/0, ticks=52742/50578, in_queue=103320, util=89.07% 00:08:46.422 nvme0n3: ios=4102/4431, merge=0/0, ticks=52347/50781, in_queue=103128, util=89.38% 00:08:46.422 nvme0n4: ios=2048/2509, merge=0/0, ticks=23205/27116, in_queue=50321, util=88.17% 00:08:46.422 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:46.422 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70060 00:08:46.422 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:46.422 12:12:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:46.422 [global] 00:08:46.422 thread=1 00:08:46.422 invalidate=1 00:08:46.422 rw=read 00:08:46.422 time_based=1 00:08:46.422 runtime=10 00:08:46.422 ioengine=libaio 00:08:46.422 direct=1 00:08:46.422 bs=4096 00:08:46.422 iodepth=1 00:08:46.422 norandommap=1 00:08:46.422 numjobs=1 00:08:46.422 00:08:46.422 [job0] 00:08:46.422 filename=/dev/nvme0n1 00:08:46.422 [job1] 00:08:46.422 filename=/dev/nvme0n2 00:08:46.422 [job2] 00:08:46.422 
filename=/dev/nvme0n3 00:08:46.422 [job3] 00:08:46.422 filename=/dev/nvme0n4 00:08:46.422 Could not set queue depth (nvme0n1) 00:08:46.422 Could not set queue depth (nvme0n2) 00:08:46.422 Could not set queue depth (nvme0n3) 00:08:46.422 Could not set queue depth (nvme0n4) 00:08:46.422 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.422 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.422 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.422 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.422 fio-3.35 00:08:46.422 Starting 4 threads 00:08:49.708 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:49.708 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=64937984, buflen=4096 00:08:49.708 fio: pid=70109, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:49.708 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:49.965 fio: pid=70108, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:49.965 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=70811648, buflen=4096 00:08:49.965 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:49.965 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:50.222 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=30699520, buflen=4096 00:08:50.222 fio: pid=70106, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:50.222 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:50.222 12:12:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:50.479 fio: pid=70107, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:50.479 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58572800, buflen=4096 00:08:50.479 00:08:50.479 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70106: Thu Dec 5 12:12:21 2024 00:08:50.479 read: IOPS=2155, BW=8620KiB/s (8827kB/s)(29.3MiB/3478msec) 00:08:50.479 slat (usec): min=9, max=19092, avg=32.34, stdev=338.57 00:08:50.479 clat (usec): min=48, max=3988, avg=429.26, stdev=124.92 00:08:50.479 lat (usec): min=162, max=19436, avg=461.61, stdev=360.32 00:08:50.479 clat percentiles (usec): 00:08:50.479 | 1.00th=[ 167], 5.00th=[ 202], 10.00th=[ 258], 20.00th=[ 302], 00:08:50.479 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:08:50.479 | 70.00th=[ 486], 80.00th=[ 494], 90.00th=[ 502], 95.00th=[ 510], 00:08:50.479 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 1057], 99.95th=[ 2802], 00:08:50.479 | 99.99th=[ 3982] 00:08:50.479 bw ( KiB/s): min= 7776, max= 8216, per=13.66%, avg=7985.33, stdev=166.54, samples=6 00:08:50.479 iops : min= 1944, max= 
2054, avg=1996.33, stdev=41.63, samples=6 00:08:50.479 lat (usec) : 50=0.01%, 250=8.83%, 500=78.26%, 750=12.67%, 1000=0.09% 00:08:50.479 lat (msec) : 2=0.07%, 4=0.05% 00:08:50.479 cpu : usr=1.01%, sys=4.60%, ctx=7510, majf=0, minf=1 00:08:50.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.479 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.479 issued rwts: total=7496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.479 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70107: Thu Dec 5 12:12:21 2024 00:08:50.479 read: IOPS=3804, BW=14.9MiB/s (15.6MB/s)(55.9MiB/3759msec) 00:08:50.479 slat (usec): min=10, max=14491, avg=20.80, stdev=230.17 00:08:50.479 clat (usec): min=105, max=7307, avg=240.59, stdev=149.10 00:08:50.479 lat (usec): min=164, max=14657, avg=261.39, stdev=274.29 00:08:50.479 clat percentiles (usec): 00:08:50.479 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 190], 00:08:50.479 | 30.00th=[ 198], 40.00th=[ 208], 50.00th=[ 227], 60.00th=[ 265], 00:08:50.479 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:08:50.479 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 2114], 99.95th=[ 3982], 00:08:50.479 | 99.99th=[ 6063] 00:08:50.479 bw ( KiB/s): min=13772, max=15672, per=25.78%, avg=15072.57, stdev=669.23, samples=7 00:08:50.479 iops : min= 3443, max= 3918, avg=3768.14, stdev=167.31, samples=7 00:08:50.479 lat (usec) : 250=52.81%, 500=46.91%, 750=0.11%, 1000=0.03% 00:08:50.479 lat (msec) : 2=0.02%, 4=0.06%, 10=0.05% 00:08:50.479 cpu : usr=0.82%, sys=5.35%, ctx=14320, majf=0, minf=2 00:08:50.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.479 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.479 issued rwts: total=14301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.479 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70108: Thu Dec 5 12:12:21 2024 00:08:50.479 read: IOPS=5354, BW=20.9MiB/s (21.9MB/s)(67.5MiB/3229msec) 00:08:50.479 slat (usec): min=11, max=12866, avg=15.64, stdev=124.33 00:08:50.479 clat (usec): min=138, max=1806, avg=169.96, stdev=21.77 00:08:50.479 lat (usec): min=150, max=13062, avg=185.61, stdev=126.41 00:08:50.479 clat percentiles (usec): 00:08:50.479 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:08:50.479 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:08:50.479 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:08:50.479 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 293], 99.95th=[ 392], 00:08:50.479 | 99.99th=[ 1287] 00:08:50.479 bw ( KiB/s): min=21216, max=21760, per=36.99%, avg=21622.67, stdev=214.99, samples=6 00:08:50.479 iops : min= 5304, max= 5440, avg=5405.67, stdev=53.75, samples=6 00:08:50.479 lat (usec) : 250=99.83%, 500=0.14%, 750=0.01%, 1000=0.01% 00:08:50.479 lat (msec) : 2=0.01% 00:08:50.479 cpu : usr=1.18%, sys=6.38%, ctx=17291, majf=0, minf=2 00:08:50.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:08:50.479 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.479 issued rwts: total=17289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.480 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70109: Thu Dec 5 12:12:21 2024 00:08:50.480 read: IOPS=5361, BW=20.9MiB/s (22.0MB/s)(61.9MiB/2957msec) 00:08:50.480 slat (nsec): min=12643, max=76106, avg=14861.74, stdev=3649.15 00:08:50.480 clat (usec): min=144, max=2131, avg=170.42, stdev=29.87 00:08:50.480 lat (usec): min=157, max=2144, avg=185.28, stdev=30.16 00:08:50.480 clat percentiles (usec): 00:08:50.480 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:08:50.480 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:08:50.480 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:08:50.480 | 99.00th=[ 206], 99.50th=[ 260], 99.90th=[ 441], 99.95th=[ 627], 00:08:50.480 | 99.99th=[ 1729] 00:08:50.480 bw ( KiB/s): min=21120, max=21712, per=36.83%, avg=21531.20, stdev=242.76, samples=5 00:08:50.480 iops : min= 5280, max= 5428, avg=5382.80, stdev=60.69, samples=5 00:08:50.480 lat (usec) : 250=99.46%, 500=0.45%, 750=0.04%, 1000=0.01% 00:08:50.480 lat (msec) : 2=0.02%, 4=0.01% 00:08:50.480 cpu : usr=1.42%, sys=6.39%, ctx=15856, majf=0, minf=2 00:08:50.480 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.480 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.480 issued rwts: total=15855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.480 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.480 00:08:50.480 Run status group 0 (all jobs): 00:08:50.480 READ: bw=57.1MiB/s (59.9MB/s), 8620KiB/s-20.9MiB/s (8827kB/s-22.0MB/s), io=215MiB (225MB), run=2957-3759msec 00:08:50.480 00:08:50.480 Disk stats (read/write): 00:08:50.480 nvme0n1: ios=7123/0, merge=0/0, ticks=3105/0, in_queue=3105, util=94.74% 00:08:50.480 nvme0n2: ios=13588/0, merge=0/0, ticks=3354/0, in_queue=3354, util=94.73% 00:08:50.480 nvme0n3: ios=16712/0, merge=0/0, ticks=2914/0, in_queue=2914, util=96.18% 00:08:50.480 nvme0n4: ios=15396/0, merge=0/0, ticks=2709/0, in_queue=2709, util=96.77% 00:08:50.480 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:50.480 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:50.737 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:50.737 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:50.994 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:50.994 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:51.251 12:12:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:51.251 12:12:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:51.535 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:51.535 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70060 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:51.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:51.793 nvmf hotplug test: fio failed as expected 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:51.793 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:52.050 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
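The trace above is the hotplug check in target/fio.sh: fio is started in the background (fio_pid=70060), the backing bdevs are deleted mid-run, and the resulting nonzero fio exit code (fio_status=4) is the expected outcome. A condensed sketch of that flow, with command names taken from this log and the control flow simplified (the real script loops over all malloc/raid/concat bdevs and does more error handling):

```bash
# Simplified from the traced fio.sh steps above, not the exact script.
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# Delete backing bdevs while fio is still reading; I/O then fails with
# "Operation not supported", which is what the hotplug test wants to see.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
wait "$fio_pid" && fio_status=0 || fio_status=$?
[[ $fio_status -ne 0 ]] && echo 'nvmf hotplug test: fio failed as expected'
```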
00:08:52.309 rmmod nvme_tcp 00:08:52.309 rmmod nvme_fabrics 00:08:52.309 rmmod nvme_keyring 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69578 ']' 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69578 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69578 ']' 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69578 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69578 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.309 killing process with pid 69578 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69578' 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69578 00:08:52.309 12:12:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69578 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:52.567 12:12:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:08:52.567 00:08:52.567 real 0m19.560s 00:08:52.567 user 1m13.160s 00:08:52.567 sys 0m9.736s 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.567 ************************************ 00:08:52.567 END TEST nvmf_fio_target 00:08:52.567 ************************************ 00:08:52.567 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.825 ************************************ 00:08:52.825 START TEST nvmf_bdevio 00:08:52.825 ************************************ 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:52.825 * Looking for test storage... 
00:08:52.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:52.825 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.826 --rc genhtml_branch_coverage=1 00:08:52.826 --rc genhtml_function_coverage=1 00:08:52.826 --rc genhtml_legend=1 00:08:52.826 --rc geninfo_all_blocks=1 00:08:52.826 --rc geninfo_unexecuted_blocks=1 00:08:52.826 00:08:52.826 ' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.826 --rc genhtml_branch_coverage=1 00:08:52.826 --rc genhtml_function_coverage=1 00:08:52.826 --rc genhtml_legend=1 00:08:52.826 --rc geninfo_all_blocks=1 00:08:52.826 --rc geninfo_unexecuted_blocks=1 00:08:52.826 00:08:52.826 ' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.826 --rc genhtml_branch_coverage=1 00:08:52.826 --rc genhtml_function_coverage=1 00:08:52.826 --rc genhtml_legend=1 00:08:52.826 --rc geninfo_all_blocks=1 00:08:52.826 --rc geninfo_unexecuted_blocks=1 00:08:52.826 00:08:52.826 ' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.826 --rc genhtml_branch_coverage=1 00:08:52.826 --rc genhtml_function_coverage=1 00:08:52.826 --rc genhtml_legend=1 00:08:52.826 --rc geninfo_all_blocks=1 00:08:52.826 --rc geninfo_unexecuted_blocks=1 00:08:52.826 00:08:52.826 ' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
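One detail worth noting in the sourcing above: the "[: : integer expression expected" message from nvmf/common.sh line 33 is bash complaining that an empty string reached a numeric test ('[' '' -eq 1 ']'). A minimal reproduction with the usual guard (the variable name here is hypothetical; in this run the offending value simply expanded to nothing):

```bash
flag=""                  # stands in for whichever env var was unset in this run
[ "$flag" -eq 1 ]        # -> "[: : integer expression expected"
[ "${flag:-0}" -eq 1 ]   # guarded: an empty value defaults to 0 and compares cleanly
```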
00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:52.826 Cannot find device "nvmf_init_br" 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:52.826 Cannot find device "nvmf_init_br2" 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:52.826 Cannot find device "nvmf_tgt_br" 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:08:52.826 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.083 Cannot find device "nvmf_tgt_br2" 00:08:53.083 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:08:53.083 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:53.083 Cannot find device "nvmf_init_br" 00:08:53.083 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:08:53.083 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:53.083 Cannot find device "nvmf_init_br2" 00:08:53.083 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:08:53.083 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:53.084 Cannot find device "nvmf_tgt_br" 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:53.084 Cannot find device "nvmf_tgt_br2" 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:53.084 Cannot find device "nvmf_br" 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:53.084 Cannot find device "nvmf_init_if" 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:53.084 Cannot find device "nvmf_init_if2" 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.084 
12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.084 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:53.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:53.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:08:53.341 00:08:53.341 --- 10.0.0.3 ping statistics --- 00:08:53.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.341 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:53.341 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:53.341 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:08:53.341 00:08:53.341 --- 10.0.0.4 ping statistics --- 00:08:53.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.341 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:53.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:53.341 00:08:53.341 --- 10.0.0.1 ping statistics --- 00:08:53.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.341 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:53.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:53.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:08:53.341 00:08:53.341 --- 10.0.0.2 ping statistics --- 00:08:53.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.341 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.341 12:12:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70481 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70481 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70481 ']' 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.341 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:53.341 [2024-12-05 12:12:24.078345] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
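
nvmfappstart launches nvmf_tgt inside the fixture namespace and then blocks in waitforlisten until the app's RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and the in-tree scripts/rpc.py client:

# Start the target on cores 3-6 (-m 0x78) inside the fixture namespace.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!

# waitforlisten-style poll: the app is up once the RPC socket responds.
for _ in $(seq 1 100); do
    if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
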
00:08:53.341 [2024-12-05 12:12:24.078434] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.600 [2024-12-05 12:12:24.223415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.600 [2024-12-05 12:12:24.267188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.600 [2024-12-05 12:12:24.267252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.600 [2024-12-05 12:12:24.267263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.600 [2024-12-05 12:12:24.267271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.600 [2024-12-05 12:12:24.267287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.600 [2024-12-05 12:12:24.268789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:53.600 [2024-12-05 12:12:24.268936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:53.600 [2024-12-05 12:12:24.269059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.600 [2024-12-05 12:12:24.269058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:53.600 [2024-12-05 12:12:24.442910] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.600 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:53.859 Malloc0 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
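
Each rpc_cmd in this stretch corresponds to one RPC method on the freshly started target; together with the add_ns and add_listener calls traced just below, the whole provisioning step replayed by hand through scripts/rpc.py would look like this sketch:

# TCP transport; -o and -u 8192 mirror the options traced above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# A 64 MiB, 512-byte-block RAM bdev to back the namespace.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Subsystem with allow-any-host (-a), then attach the bdev and listen in-namespace.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
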
00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:53.859 [2024-12-05 12:12:24.505533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.859 { 00:08:53.859 "params": { 00:08:53.859 "name": "Nvme$subsystem", 00:08:53.859 "trtype": "$TEST_TRANSPORT", 00:08:53.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.859 "adrfam": "ipv4", 00:08:53.859 "trsvcid": "$NVMF_PORT", 00:08:53.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.859 "hdgst": ${hdgst:-false}, 00:08:53.859 "ddgst": ${ddgst:-false} 00:08:53.859 }, 00:08:53.859 "method": "bdev_nvme_attach_controller" 00:08:53.859 } 00:08:53.859 EOF 00:08:53.859 )") 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:53.859 12:12:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.859 "params": { 00:08:53.859 "name": "Nvme1", 00:08:53.859 "trtype": "tcp", 00:08:53.859 "traddr": "10.0.0.3", 00:08:53.859 "adrfam": "ipv4", 00:08:53.859 "trsvcid": "4420", 00:08:53.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.859 "hdgst": false, 00:08:53.859 "ddgst": false 00:08:53.859 }, 00:08:53.859 "method": "bdev_nvme_attach_controller" 00:08:53.859 }' 00:08:53.859 [2024-12-05 12:12:24.570821] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
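
gen_nvmf_target_json resolves the heredoc into the single bdev_nvme_attach_controller entry that bdevio reads from /dev/fd/62; the fully resolved parameters (Nvme1, tcp, 10.0.0.3:4420, digests off) are printed in the trace above. The same attachment expressed as a standalone RPC against an already-running app would be, as a sketch:

# One NVMe-oF/TCP controller named Nvme1, header/data digests disabled.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.3 -f ipv4 \
    -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
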
00:08:53.859 [2024-12-05 12:12:24.570905] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70527 ] 00:08:53.859 [2024-12-05 12:12:24.725254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.118 [2024-12-05 12:12:24.779768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.118 [2024-12-05 12:12:24.779892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.118 [2024-12-05 12:12:24.779900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.118 I/O targets: 00:08:54.118 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:54.118 00:08:54.118 00:08:54.118 CUnit - A unit testing framework for C - Version 2.1-3 00:08:54.118 http://cunit.sourceforge.net/ 00:08:54.118 00:08:54.118 00:08:54.118 Suite: bdevio tests on: Nvme1n1 00:08:54.376 Test: blockdev write read block ...passed 00:08:54.376 Test: blockdev write zeroes read block ...passed 00:08:54.376 Test: blockdev write zeroes read no split ...passed 00:08:54.376 Test: blockdev write zeroes read split ...passed 00:08:54.376 Test: blockdev write zeroes read split partial ...passed 00:08:54.376 Test: blockdev reset ...[2024-12-05 12:12:25.077891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:54.376 [2024-12-05 12:12:25.077996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0cf50 (9): Bad file descriptor 00:08:54.376 [2024-12-05 12:12:25.091854] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:08:54.376 passed 00:08:54.376 Test: blockdev write read 8 blocks ...passed 00:08:54.376 Test: blockdev write read size > 128k ...passed 00:08:54.376 Test: blockdev write read invalid size ...passed 00:08:54.376 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:54.376 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:54.376 Test: blockdev write read max offset ...passed 00:08:54.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:54.376 Test: blockdev writev readv 8 blocks ...passed 00:08:54.376 Test: blockdev writev readv 30 x 1block ...passed 00:08:54.634 Test: blockdev writev readv block ...passed 00:08:54.634 Test: blockdev writev readv size > 128k ...passed 00:08:54.634 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:54.634 Test: blockdev comparev and writev ...[2024-12-05 12:12:25.262599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:54.634 [2024-12-05 12:12:25.262764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.262903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:54.634 [2024-12-05 12:12:25.263027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.263470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:54.634 [2024-12-05 12:12:25.263582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.263710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:54.634 [2024-12-05 12:12:25.263821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.264198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:54.634 [2024-12-05 12:12:25.264353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.264462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:54.634 [2024-12-05 12:12:25.264568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.264966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:54.634 [2024-12-05 12:12:25.265064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.265173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:54.634 [2024-12-05 12:12:25.265257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:54.634 passed 00:08:54.634 Test: blockdev nvme passthru rw ...passed 00:08:54.634 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:12:25.347672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:54.634 [2024-12-05 12:12:25.347821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.348065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:54.634 [2024-12-05 12:12:25.348167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.348410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:54.634 [2024-12-05 12:12:25.348496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:54.634 [2024-12-05 12:12:25.348697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:54.634 [2024-12-05 12:12:25.348787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:54.634 passed 00:08:54.634 Test: blockdev nvme admin passthru ...passed 00:08:54.634 Test: blockdev copy ...passed 00:08:54.634 00:08:54.634 Run Summary: Type Total Ran Passed Failed Inactive 00:08:54.634 suites 1 1 n/a 0 0 00:08:54.634 tests 23 23 23 0 0 00:08:54.634 asserts 152 152 152 0 n/a 00:08:54.634 00:08:54.634 Elapsed time = 0.888 seconds 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.892 rmmod nvme_tcp 00:08:54.892 rmmod nvme_fabrics 00:08:54.892 rmmod nvme_keyring 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
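
With all 23 tests and 152 asserts green, teardown mirrors setup in reverse: the subsystem is deleted while the target still serves RPC, buffers are synced, and the kernel initiator modules are unloaded; removing nvme-tcp drags nvme_fabrics and nvme_keyring out with it, as the rmmod lines show. A sketch of the same steps:

# Remove the subsystem over RPC before tearing the target down.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
# Unload the kernel initiator stack; dependent modules go with nvme-tcp.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Finally stop the nvmf_tgt process.
kill "$nvmfpid" && wait "$nvmfpid"
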
00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 70481 ']' 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70481 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70481 ']' 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70481 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70481 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:54.892 killing process with pid 70481 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70481' 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70481 00:08:54.892 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70481 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:55.150 12:12:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:55.150 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:55.150 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:55.150 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:55.150 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:08:55.407 00:08:55.407 real 0m2.691s 00:08:55.407 user 0m8.634s 00:08:55.407 sys 0m0.836s 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:55.407 ************************************ 00:08:55.407 END TEST nvmf_bdevio 00:08:55.407 ************************************ 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:55.407 00:08:55.407 real 3m31.782s 00:08:55.407 user 11m5.359s 00:08:55.407 sys 1m4.541s 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.407 12:12:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:55.407 ************************************ 00:08:55.407 END TEST nvmf_target_core 00:08:55.407 ************************************ 00:08:55.408 12:12:26 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:55.408 12:12:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.408 12:12:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.408 12:12:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:55.408 ************************************ 00:08:55.408 START TEST nvmf_target_extra 00:08:55.408 ************************************ 00:08:55.408 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:55.666 * Looking for test storage... 
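
The firewall cleanup traced above leans on the tagging done at setup time: every rule was installed with -m comment --comment 'SPDK_NVMF:...', so iptr can strip exactly the rules this test added without disturbing the rest of the ruleset:

# Drop only the rules carrying the SPDK_NVMF comment tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore
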
00:08:55.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.666 --rc genhtml_branch_coverage=1 00:08:55.666 --rc genhtml_function_coverage=1 00:08:55.666 --rc genhtml_legend=1 00:08:55.666 --rc geninfo_all_blocks=1 00:08:55.666 --rc geninfo_unexecuted_blocks=1 00:08:55.666 00:08:55.666 ' 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.666 --rc genhtml_branch_coverage=1 00:08:55.666 --rc genhtml_function_coverage=1 00:08:55.666 --rc genhtml_legend=1 00:08:55.666 --rc geninfo_all_blocks=1 00:08:55.666 --rc geninfo_unexecuted_blocks=1 00:08:55.666 00:08:55.666 ' 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.666 --rc genhtml_branch_coverage=1 00:08:55.666 --rc genhtml_function_coverage=1 00:08:55.666 --rc genhtml_legend=1 00:08:55.666 --rc geninfo_all_blocks=1 00:08:55.666 --rc geninfo_unexecuted_blocks=1 00:08:55.666 00:08:55.666 ' 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.666 --rc genhtml_branch_coverage=1 00:08:55.666 --rc genhtml_function_coverage=1 00:08:55.666 --rc genhtml_legend=1 00:08:55.666 --rc geninfo_all_blocks=1 00:08:55.666 --rc geninfo_unexecuted_blocks=1 00:08:55.666 00:08:55.666 ' 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.666 12:12:26 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.666 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.667 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:55.667 ************************************ 00:08:55.667 START TEST nvmf_example 00:08:55.667 ************************************ 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:55.667 * Looking for test storage... 
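
Note the shell error captured a few lines up: common.sh line 33 evaluates [ '' -eq 1 ], and -eq requires integer operands, so an empty variable makes the test itself fail with "integer expression expected" (harmless here, since the branch is not taken either way). A defensive sketch of the same check; SPDK_TEST_FLAG is a placeholder name, not the variable the script actually uses:

# Default an empty/unset value to 0 so -eq always sees an integer.
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then    # SPDK_TEST_FLAG: hypothetical stand-in
    echo "flag enabled"
fi
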
00:08:55.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.667 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.928 --rc genhtml_branch_coverage=1 00:08:55.928 --rc genhtml_function_coverage=1 00:08:55.928 --rc genhtml_legend=1 00:08:55.928 --rc geninfo_all_blocks=1 00:08:55.928 --rc geninfo_unexecuted_blocks=1 00:08:55.928 00:08:55.928 ' 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.928 --rc genhtml_branch_coverage=1 00:08:55.928 --rc genhtml_function_coverage=1 00:08:55.928 --rc genhtml_legend=1 00:08:55.928 --rc geninfo_all_blocks=1 00:08:55.928 --rc geninfo_unexecuted_blocks=1 00:08:55.928 00:08:55.928 ' 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.928 --rc genhtml_branch_coverage=1 00:08:55.928 --rc genhtml_function_coverage=1 00:08:55.928 --rc genhtml_legend=1 00:08:55.928 --rc geninfo_all_blocks=1 00:08:55.928 --rc geninfo_unexecuted_blocks=1 00:08:55.928 00:08:55.928 ' 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.928 --rc genhtml_branch_coverage=1 00:08:55.928 --rc genhtml_function_coverage=1 00:08:55.928 --rc genhtml_legend=1 00:08:55.928 --rc geninfo_all_blocks=1 00:08:55.928 --rc geninfo_unexecuted_blocks=1 00:08:55.928 00:08:55.928 ' 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:55.928 12:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.928 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.929 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:55.929 12:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:08:55.929 Cannot find device "nvmf_init_br"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:08:55.929 Cannot find device "nvmf_init_br2"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:08:55.929 Cannot find device "nvmf_tgt_br"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:08:55.929 Cannot find device "nvmf_tgt_br2"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:08:55.929 Cannot find device "nvmf_init_br"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:08:55.929 Cannot find device "nvmf_init_br2"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:08:55.929 Cannot find device "nvmf_tgt_br"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:08:55.929 Cannot find device "nvmf_tgt_br2"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:08:55.929 Cannot find device "nvmf_br"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:08:55.929 Cannot find device "nvmf_init_if"
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true
00:08:55.929 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:08:55.929 Cannot find device "nvmf_init_if2"
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:08:56.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:08:56.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:08:56.209 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:08:56.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:08:56.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms
00:08:56.209
00:08:56.209 --- 10.0.0.3 ping statistics ---
00:08:56.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.209 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:08:56.209 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:08:56.209 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:08:56.209
00:08:56.209 --- 10.0.0.4 ping statistics ---
00:08:56.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.209 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:08:56.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:56.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:08:56.209
00:08:56.209 --- 10.0.0.1 ping statistics ---
00:08:56.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.209 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:08:56.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:56.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms
00:08:56.209
00:08:56.209 --- 10.0.0.2 ping statistics ---
00:08:56.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:56.209 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:56.209 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
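The ip(8)/iptables(8) trace above builds the complete NVMe/TCP test topology: four veth pairs, a network namespace holding the target side, one bridge, and firewall openings for port 4420. Below is a minimal stand-alone sketch of that setup; every device name, address, and rule is taken from the trace itself, and root privileges on a clean host are assumed.

#!/usr/bin/env bash
# Sketch of the topology built by nvmf/common.sh in the trace above.
set -e

ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# The target ends live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators on .1/.2, target on .3/.4, all in 10.0.0.0/24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring all ends up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties the host-side peers of all four pairs together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open TCP/4420 on the initiator interfaces and allow forwarding on the
# bridge; the SPDK_NVMF comment tag lets teardown find these rules later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

With everything on one L2 segment, the four pings above confirm reachability in both directions between the initiator addresses (10.0.0.1/2) and the namespaced target addresses (10.0.0.3/4).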
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=70813
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 70813
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 70813 ']'
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:56.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:56.210 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.593 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:08:57.594 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:09.808 Initializing NVMe Controllers
00:09:09.808 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:09:09.808 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:09.808 Initialization complete. Launching workers.
00:09:09.808 ========================================================
00:09:09.808                                                                Latency(us)
00:09:09.808 Device Information                                       :     IOPS     MiB/s   Average       min       max
00:09:09.808 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16852.50     65.83   3799.93    692.55  21892.64
00:09:09.808 ========================================================
00:09:09.808 Total                                                    : 16852.50     65.83   3799.93    692.55  21892.64
00:09:09.808
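rpc_cmd in the trace is a shell helper that forwards the same method names to the target's JSON-RPC socket; the sketch below replays the provisioning sequence with scripts/rpc.py from the same repo, assuming the nvmf example app is already up and listening on the default /var/tmp/spdk.sock. All flags are copied verbatim from the trace.

#!/usr/bin/env bash
# Sketch of the subsystem provisioning and perf run traced above.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as traced
$rpc bdev_malloc_create 64 512                  # 64 MiB RAM bdev, 512 B blocks ("Malloc0" in the log)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# 10 s random mixed workload (-M 30: 30% reads), queue depth 64, 4 KiB I/O,
# against the listener just created:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The run above reported roughly 16852 IOPS at a 3.8 ms mean latency; with a single Malloc namespace driven from one core there is exactly one device row, so the Total row repeats it.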
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:09.808 rmmod nvme_tcp
00:09:09.808 rmmod nvme_fabrics
00:09:09.808 rmmod nvme_keyring
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 70813 ']'
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 70813
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 70813 ']'
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 70813
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70813
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:09:09.808 killing process with pid 70813
12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70813'
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 70813
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 70813
00:09:09.808 nvmf threads initialize successfully
00:09:09.808 bdev subsystem init successfully
00:09:09.808 created a nvmf target service
00:09:09.808 create targets's poll groups done
00:09:09.808 all subsystems of target started
00:09:09.808 nvmf target is running
00:09:09.808 all subsystems of target stopped
00:09:09.808 destroy targets's poll groups done
00:09:09.808 destroyed the nvmf target service
00:09:09.808 bdev subsystem finish successfully
00:09:09.808 nvmf threads destroy successfully
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:09:09.808 12:12:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0
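nvmftestfini above tears the topology down in reverse. A condensed sketch of that path follows; it keeps the trace's order and uses the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline, which drops exactly the comment-tagged rules inserted during setup. The final ip netns delete is an assumption about what _remove_spdk_ns does internally; the trace only shows the wrapper being invoked.

#!/usr/bin/env bash
# Sketch of the teardown traced above. '|| true' mirrors the setup phase's
# tolerance of already-missing devices.

# Remove only the firewall rules tagged SPDK_NVMF at insertion time.
iptables-save | grep -v SPDK_NVMF | iptables-restore

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true
    ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true   # assumed equivalent of _remove_spdk_ns

Deleting one end of a veth pair removes its peer as well, so the explicit link deletions plus the namespace removal leave no residue for the next test in the suite.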
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:09.808
00:09:09.808 real 0m12.753s
00:09:09.808 user 0m45.025s
00:09:09.808 sys 0m2.124s
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:09.808 ************************************
00:09:09.808 END TEST nvmf_example
00:09:09.808 ************************************
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:09:09.808 ************************************
00:09:09.808 START TEST nvmf_filesystem
00:09:09.808 ************************************
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:09.808 * Looking for test storage...
00:09:09.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:09.808 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:09.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.809 --rc genhtml_branch_coverage=1
00:09:09.809 --rc genhtml_function_coverage=1
00:09:09.809 --rc genhtml_legend=1
00:09:09.809 --rc geninfo_all_blocks=1
00:09:09.809 --rc geninfo_unexecuted_blocks=1
00:09:09.809
00:09:09.809 '
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:09.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.809 --rc genhtml_branch_coverage=1
00:09:09.809 --rc genhtml_function_coverage=1
00:09:09.809 --rc genhtml_legend=1
00:09:09.809 --rc geninfo_all_blocks=1
00:09:09.809 --rc geninfo_unexecuted_blocks=1
00:09:09.809
00:09:09.809 '
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:09.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.809 --rc genhtml_branch_coverage=1
00:09:09.809 --rc genhtml_function_coverage=1
00:09:09.809 --rc genhtml_legend=1
00:09:09.809 --rc geninfo_all_blocks=1
00:09:09.809 --rc geninfo_unexecuted_blocks=1
00:09:09.809
00:09:09.809 '
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:09.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.809 --rc genhtml_branch_coverage=1
00:09:09.809 --rc genhtml_function_coverage=1
00:09:09.809 --rc genhtml_legend=1
00:09:09.809 --rc geninfo_all_blocks=1
00:09:09.809 --rc geninfo_unexecuted_blocks=1
00:09:09.809
00:09:09.809 '
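The lt/cmp_versions trace above is scripts/common.sh deciding which lcov option spelling to use: the installed lcov reports 1.15, its fields are compared numerically against 2, and since 1 < 2 in the first field the pre-2.0 form --rc lcov_branch_coverage=1 is exported. A compact sketch of the same field-by-field comparison, assuming purely numeric dot-separated versions (the real helper also splits on '-' and ':'):

#!/usr/bin/env bash
# Minimal version compare in the spirit of cmp_versions above.
lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields count as 0, so "1.15" vs "2" compares as 1.15 vs 2.0.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1"

The comparison is field-wise and numeric, not lexicographic: the first differing field decides, so 1.15 < 2 holds even though the second field 15 is larger than anything in "2".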
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:09:09.809 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:09:09.810 #define SPDK_CONFIG_H
00:09:09.810 #define SPDK_CONFIG_AIO_FSDEV 1
00:09:09.810 #define SPDK_CONFIG_APPS 1
00:09:09.810 #define SPDK_CONFIG_ARCH native
00:09:09.810 #undef SPDK_CONFIG_ASAN
00:09:09.810 #define SPDK_CONFIG_AVAHI 1
00:09:09.810 #undef SPDK_CONFIG_CET
00:09:09.810 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:09:09.810 #define SPDK_CONFIG_COVERAGE 1
00:09:09.810 #define SPDK_CONFIG_CROSS_PREFIX
00:09:09.810 #undef SPDK_CONFIG_CRYPTO
00:09:09.810 #undef SPDK_CONFIG_CRYPTO_MLX5
00:09:09.810 #undef SPDK_CONFIG_CUSTOMOCF
00:09:09.810 #undef SPDK_CONFIG_DAOS
00:09:09.810 #define SPDK_CONFIG_DAOS_DIR
00:09:09.810 #define SPDK_CONFIG_DEBUG 1
00:09:09.810 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:09:09.810 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:09:09.810 #define SPDK_CONFIG_DPDK_INC_DIR
00:09:09.810 #define SPDK_CONFIG_DPDK_LIB_DIR
00:09:09.810 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:09:09.810 #undef SPDK_CONFIG_DPDK_UADK
00:09:09.810 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:09:09.810 #define SPDK_CONFIG_EXAMPLES 1
00:09:09.810 #undef SPDK_CONFIG_FC
00:09:09.810 #define SPDK_CONFIG_FC_PATH
00:09:09.810 #define SPDK_CONFIG_FIO_PLUGIN 1
00:09:09.810 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:09:09.810 #define SPDK_CONFIG_FSDEV 1
00:09:09.810 #undef SPDK_CONFIG_FUSE
00:09:09.810 #undef SPDK_CONFIG_FUZZER
00:09:09.810 #define SPDK_CONFIG_FUZZER_LIB
00:09:09.810 #define SPDK_CONFIG_GOLANG 1
00:09:09.810 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:09:09.810 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:09:09.810 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:09:09.810 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:09:09.810 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:09:09.810 #undef SPDK_CONFIG_HAVE_LIBBSD
00:09:09.810 #undef SPDK_CONFIG_HAVE_LZ4
00:09:09.810 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:09:09.810 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:09:09.810 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:09:09.810 #define SPDK_CONFIG_IDXD 1
00:09:09.810 #define SPDK_CONFIG_IDXD_KERNEL 1
00:09:09.810 #undef SPDK_CONFIG_IPSEC_MB
00:09:09.810 #define SPDK_CONFIG_IPSEC_MB_DIR
00:09:09.810 #define SPDK_CONFIG_ISAL 1
00:09:09.810 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:09:09.810 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:09:09.810 #define SPDK_CONFIG_LIBDIR
00:09:09.810 #undef SPDK_CONFIG_LTO
00:09:09.810 #define SPDK_CONFIG_MAX_LCORES 128
00:09:09.810 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:09:09.810 #define SPDK_CONFIG_NVME_CUSE 1
00:09:09.810 #undef SPDK_CONFIG_OCF
00:09:09.810 #define SPDK_CONFIG_OCF_PATH
00:09:09.810 #define SPDK_CONFIG_OPENSSL_PATH
00:09:09.810 #undef SPDK_CONFIG_PGO_CAPTURE
00:09:09.810 #define SPDK_CONFIG_PGO_DIR
00:09:09.810 #undef SPDK_CONFIG_PGO_USE
00:09:09.810 #define SPDK_CONFIG_PREFIX /usr/local
00:09:09.810 #undef SPDK_CONFIG_RAID5F
00:09:09.810 #undef SPDK_CONFIG_RBD
00:09:09.810 #define SPDK_CONFIG_RDMA 1
00:09:09.810 #define SPDK_CONFIG_RDMA_PROV verbs
00:09:09.810 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:09:09.810 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:09:09.810 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:09:09.810 #define SPDK_CONFIG_SHARED 1
00:09:09.810 #undef SPDK_CONFIG_SMA
00:09:09.810 #define SPDK_CONFIG_TESTS 1
00:09:09.810 #undef SPDK_CONFIG_TSAN
00:09:09.810 #define SPDK_CONFIG_UBLK 1
00:09:09.810 #define SPDK_CONFIG_UBSAN 1
00:09:09.810 #undef SPDK_CONFIG_UNIT_TESTS
00:09:09.810 #undef SPDK_CONFIG_URING
00:09:09.810 #define SPDK_CONFIG_URING_PATH
00:09:09.810 #undef SPDK_CONFIG_URING_ZNS
00:09:09.810 #define SPDK_CONFIG_USDT 1
00:09:09.810 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:09:09.810 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:09:09.810 #undef SPDK_CONFIG_VFIO_USER
00:09:09.810 #define SPDK_CONFIG_VFIO_USER_DIR
00:09:09.810 #define SPDK_CONFIG_VHOST 1
00:09:09.810 #define SPDK_CONFIG_VIRTIO 1
00:09:09.810 #undef SPDK_CONFIG_VTUNE
00:09:09.810 #define SPDK_CONFIG_VTUNE_DIR
00:09:09.810 #define SPDK_CONFIG_WERROR 1
00:09:09.810 #define SPDK_CONFIG_WPDK_DIR
00:09:09.810 #undef SPDK_CONFIG_XNVME
00:09:09.810 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
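applications.sh@23 above slurps include/spdk/config.h and glob-matches the whole file against the backslash-escaped #define SPDK_CONFIG_DEBUG pattern; that single [[ ]] test is how the harness detects a debug build, and here it matches, since the dump shows #define SPDK_CONFIG_DEBUG 1. A stand-alone sketch of the same probe, path taken from the trace:

#!/usr/bin/env bash
# Sketch of the debug-build probe from test/common/applications.sh.
config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
if [[ -e "$config_h" && "$(< "$config_h")" == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build detected"
fi

Reading the file with $(< ...) and glob-matching keeps the check inside the shell, with no extra grep process, a small economy for helpers that are sourced on every test.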
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:09.810 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:09:09.811
12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:09.811 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:09.812 12:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:09.812 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:09:09.813 12:12:39 
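
The long run of ": 0" (or ": 1", ": tcp") lines each followed by "export SPDK_TEST_*" in the trace above is consistent with bash's default-assignment idiom, in which ":" is a no-op builtin whose argument expansion performs the assignment as a side effect. A minimal sketch of that pattern, assuming autotest_common.sh uses the ${VAR:=default} form (the trace only shows the post-expansion result, so this is an inference, not the script's verbatim source):

    # ':' is a no-op that forces the expansion; ':=' assigns 0 only when
    # the variable is unset or empty, and the export publishes the result
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF

With xtrace enabled and the variable initially unset, this prints exactly ": 0" followed by "export SPDK_TEST_NVMF", matching the pairs recorded in the log.
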
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71092 ]] 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71092 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ZWXroC 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.ZWXroC/tests/target /tmp/spdk.ZWXroC 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979062272 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5590265856 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256394240 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979062272 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5590265856 00:09:09.813 
12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:09:09.813 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266286080 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=96768241664 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=2934538240 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:09.814 * Looking for test storage... 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13979062272 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:09.814 12:12:39 
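
The PS4 assignment traced just above is what produces the per-command prefixes seen throughout this log: bash prompt-expands PS4 before printing each traced command, so \t supplies the wall-clock timestamp (the 12:12:39 fields), ${test_domain:-} the dotted test name (nvmf_tcp.nvmf_target_extra.nvmf_filesystem), and ${BASH_SOURCE...}@${LINENO} the "script@line" marker. A minimal sketch of the same technique, assuming the trace stream is routed to a spare file descriptor as the xtrace_fd/exec calls nearby suggest:

    # send xtrace output to fd 15 instead of stderr, then stamp each
    # traced command; PS4 is single-quoted so ${...} expands at trace time
    exec 15>trace.log
    export BASH_XTRACEFD=15
    PS4=' \t ${FUNCNAME[0]:-main} -- ${BASH_SOURCE}@${LINENO} -- \$ '
    set -x

Separating the trace onto its own descriptor keeps stderr clean for real errors while still capturing every command with its source location, which is exactly what makes a log like this one greppable by script and line number.
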
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.814 --rc genhtml_branch_coverage=1 00:09:09.814 --rc genhtml_function_coverage=1 00:09:09.814 --rc genhtml_legend=1 00:09:09.814 --rc geninfo_all_blocks=1 00:09:09.814 --rc geninfo_unexecuted_blocks=1 00:09:09.814 00:09:09.814 ' 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.814 --rc genhtml_branch_coverage=1 00:09:09.814 --rc genhtml_function_coverage=1 00:09:09.814 --rc genhtml_legend=1 00:09:09.814 --rc geninfo_all_blocks=1 00:09:09.814 --rc geninfo_unexecuted_blocks=1 00:09:09.814 00:09:09.814 ' 00:09:09.814 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.814 --rc genhtml_branch_coverage=1 00:09:09.814 --rc genhtml_function_coverage=1 00:09:09.814 --rc genhtml_legend=1 00:09:09.814 --rc geninfo_all_blocks=1 00:09:09.814 --rc geninfo_unexecuted_blocks=1 00:09:09.814 00:09:09.814 ' 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.815 --rc genhtml_branch_coverage=1 00:09:09.815 --rc genhtml_function_coverage=1 00:09:09.815 --rc genhtml_legend=1 00:09:09.815 --rc geninfo_all_blocks=1 00:09:09.815 --rc geninfo_unexecuted_blocks=1 00:09:09.815 00:09:09.815 ' 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.815 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.815 12:12:39 
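
The "[: : integer expression expected" message recorded above is bash's test builtin rejecting an empty string where a number is required: the traced command at nvmf/common.sh line 33 was '[' '' -eq 1 ']', i.e. a numeric comparison against a variable that expanded to nothing. A minimal hedged sketch of the usual guard (the variable name here is illustrative, not taken from common.sh):

    # default the expansion so the numeric test always sees an integer,
    # even when FLAG is unset or empty
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi

Note the test is non-fatal here; the script continues and the message is simply captured in the log.
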
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.815 Cannot find device "nvmf_init_br" 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.815 Cannot find device "nvmf_init_br2" 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.815 Cannot find device "nvmf_tgt_br" 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.815 Cannot find device "nvmf_tgt_br2" 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.815 Cannot find device "nvmf_init_br" 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.815 Cannot find device "nvmf_init_br2" 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:09:09.815 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.816 Cannot find device "nvmf_tgt_br" 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.816 Cannot find device "nvmf_tgt_br2" 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.816 Cannot find device "nvmf_br" 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.816 Cannot find device "nvmf_init_if" 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.816 Cannot find device "nvmf_init_if2" 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.816 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.816 12:12:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:09.816 00:09:09.816 --- 10.0.0.3 ping statistics --- 00:09:09.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.816 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:09.816 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:09.816 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:09:09.816 00:09:09.816 --- 10.0.0.4 ping statistics --- 00:09:09.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.816 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:09.816 00:09:09.816 --- 10.0.0.1 ping statistics --- 00:09:09.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.816 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:09.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:09:09.816 00:09:09.816 --- 10.0.0.2 ping statistics --- 00:09:09.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.816 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:09.816 ************************************ 00:09:09.816 START TEST nvmf_filesystem_no_in_capsule 00:09:09.816 ************************************ 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71276 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71276 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71276 ']' 00:09:09.816 12:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.816 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.816 [2024-12-05 12:12:40.210316] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:09:09.816 [2024-12-05 12:12:40.210414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.817 [2024-12-05 12:12:40.362836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.817 [2024-12-05 12:12:40.409392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.817 [2024-12-05 12:12:40.409459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.817 [2024-12-05 12:12:40.409488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.817 [2024-12-05 12:12:40.409496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.817 [2024-12-05 12:12:40.409502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
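The "Cannot find device" and "Cannot open network namespace" errors in the teardown above are expected on a fresh runner: each cleanup command is immediately followed by `-- # true`, so a missing device never fails the script. The setup that follows builds nvmf/common.sh's dual-path TCP topology: two initiator-side veths (nvmf_init_if at 10.0.0.1/24, nvmf_init_if2 at 10.0.0.2/24) stay in the root namespace, two target-side veths (nvmf_tgt_if at 10.0.0.3/24, nvmf_tgt_if2 at 10.0.0.4/24) move into nvmf_tgt_ns_spdk, all four bridge-side peer ends are enslaved to nvmf_br, TCP port 4420 is opened with SPDK_NVMF-tagged iptables rules, and four pings prove both directions work before nvmf_tgt is launched inside the namespace via `ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF`. A minimal single-path sketch of the same topology, with names and addresses copied from the trace (run as root; an illustration, not the suite's script):

    ip netns add nvmf_tgt_ns_spdk                      # target side lives here
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # only the named end moves

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge                    # joins the *_br peer ends
    ip link set nvmf_br up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Admit NVMe/TCP on 4420; the comment lets teardown find and delete the rule.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3                                 # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator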
00:09:09.817 [2024-12-05 12:12:40.414314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.817 [2024-12-05 12:12:40.414464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.817 [2024-12-05 12:12:40.414600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.817 [2024-12-05 12:12:40.414604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.817 [2024-12-05 12:12:40.581801] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.817 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.076 Malloc1 00:09:10.076 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.076 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.076 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.076 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.076 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.076 12:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.076 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.076 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.076 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.077 [2024-12-05 12:12:40.761845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:10.077 { 00:09:10.077 "aliases": [ 00:09:10.077 "a4e89a44-2c3e-47e0-8d63-ecf37e7877ce" 00:09:10.077 ], 00:09:10.077 "assigned_rate_limits": { 00:09:10.077 "r_mbytes_per_sec": 0, 00:09:10.077 "rw_ios_per_sec": 0, 00:09:10.077 "rw_mbytes_per_sec": 0, 00:09:10.077 "w_mbytes_per_sec": 0 00:09:10.077 }, 00:09:10.077 "block_size": 512, 00:09:10.077 "claim_type": "exclusive_write", 00:09:10.077 "claimed": true, 00:09:10.077 "driver_specific": {}, 00:09:10.077 "memory_domains": [ 00:09:10.077 { 00:09:10.077 "dma_device_id": "system", 00:09:10.077 "dma_device_type": 1 00:09:10.077 }, 00:09:10.077 { 00:09:10.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.077 
"dma_device_type": 2 00:09:10.077 } 00:09:10.077 ], 00:09:10.077 "name": "Malloc1", 00:09:10.077 "num_blocks": 1048576, 00:09:10.077 "product_name": "Malloc disk", 00:09:10.077 "supported_io_types": { 00:09:10.077 "abort": true, 00:09:10.077 "compare": false, 00:09:10.077 "compare_and_write": false, 00:09:10.077 "copy": true, 00:09:10.077 "flush": true, 00:09:10.077 "get_zone_info": false, 00:09:10.077 "nvme_admin": false, 00:09:10.077 "nvme_io": false, 00:09:10.077 "nvme_io_md": false, 00:09:10.077 "nvme_iov_md": false, 00:09:10.077 "read": true, 00:09:10.077 "reset": true, 00:09:10.077 "seek_data": false, 00:09:10.077 "seek_hole": false, 00:09:10.077 "unmap": true, 00:09:10.077 "write": true, 00:09:10.077 "write_zeroes": true, 00:09:10.077 "zcopy": true, 00:09:10.077 "zone_append": false, 00:09:10.077 "zone_management": false 00:09:10.077 }, 00:09:10.077 "uuid": "a4e89a44-2c3e-47e0-8d63-ecf37e7877ce", 00:09:10.077 "zoned": false 00:09:10.077 } 00:09:10.077 ]' 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:10.077 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:10.336 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.336 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:10.336 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.336 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:10.336 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:12.239 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:12.239 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:12.239 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:12.239 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:12.239 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.239 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:12.526 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 ************************************ 00:09:13.459 START TEST filesystem_ext4 00:09:13.459 ************************************ 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
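At this point the initiator has connected (nvme connect with the host NQN) and waitforserial has polled `lsblk -l -o NAME,SERIAL` until a device carrying serial SPDKISFASTANDAWESOME appeared, here nvme0n1. The size check then confirms the remote namespace is exactly the exported 512 MiB malloc bdev: block_size 512 times num_blocks 1048576 from bdev_get_bdevs gives 536870912 bytes, which sec_size_to_bytes reports for /sys/block/nvme0n1, so `(( nvme_size == malloc_size ))` passes. A condensed sketch of that check and the partitioning step, using scripts/rpc.py directly in place of the suite's rpc_cmd wrapper (an assumption for illustration):

    bdev=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev")       # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev")       # 1048576
    malloc_size=$(( bs * nb ))                   # 536870912

    sectors=$(cat /sys/block/nvme0n1/size)       # sysfs size is in 512 B sectors
    (( sectors * 512 == malloc_size )) || exit 1

    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe                                    # make /dev/nvme0n1p1 appear

Each filesystem_* subtest then calls make_filesystem, which picks the force flag the formatter expects (-F for ext4, -f for btrfs and xfs, per the `'[' ext4 = ext4 ']'` branch above) and formats /dev/nvme0n1p1.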
00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:13.459 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:13.459 mke2fs 1.47.0 (5-Feb-2023) 00:09:13.717 Discarding device blocks: 0/522240 done 00:09:13.717 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:13.717 Filesystem UUID: 41fea32a-61cb-4c72-8eae-d1f43df7654b 00:09:13.717 Superblock backups stored on blocks: 00:09:13.717 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:13.717 00:09:13.717 Allocating group tables: 0/64 done 00:09:13.717 Writing inode tables: 0/64 done 00:09:13.717 Creating journal (8192 blocks): done 00:09:13.717 Writing superblocks and filesystem accounting information: 0/64 done 00:09:13.717 00:09:13.717 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:13.717 12:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:18.979 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:18.979 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:18.979 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.237 
12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71276 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.237 00:09:19.237 real 0m5.647s 00:09:19.237 user 0m0.014s 00:09:19.237 sys 0m0.072s 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:19.237 ************************************ 00:09:19.237 END TEST filesystem_ext4 00:09:19.237 ************************************ 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.237 ************************************ 00:09:19.237 START TEST filesystem_btrfs 00:09:19.237 ************************************ 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:19.237 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:19.238 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:19.238 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:19.238 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:19.238 12:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:19.238 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:19.238 12:12:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:19.238 btrfs-progs v6.8.1 00:09:19.238 See https://btrfs.readthedocs.io for more information. 00:09:19.238 00:09:19.238 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:19.238 NOTE: several default settings have changed in version 5.15, please make sure 00:09:19.238 this does not affect your deployments: 00:09:19.238 - DUP for metadata (-m dup) 00:09:19.238 - enabled no-holes (-O no-holes) 00:09:19.238 - enabled free-space-tree (-R free-space-tree) 00:09:19.238 00:09:19.238 Label: (null) 00:09:19.238 UUID: b3f3797f-74df-4df3-b9b0-db521b5b9ac6 00:09:19.238 Node size: 16384 00:09:19.238 Sector size: 4096 (CPU page size: 4096) 00:09:19.238 Filesystem size: 510.00MiB 00:09:19.238 Block group profiles: 00:09:19.238 Data: single 8.00MiB 00:09:19.238 Metadata: DUP 32.00MiB 00:09:19.238 System: DUP 8.00MiB 00:09:19.238 SSD detected: yes 00:09:19.238 Zoned device: no 00:09:19.238 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:19.238 Checksum: crc32c 00:09:19.238 Number of devices: 1 00:09:19.238 Devices: 00:09:19.238 ID SIZE PATH 00:09:19.238 1 510.00MiB /dev/nvme0n1p1 00:09:19.238 00:09:19.238 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:19.238 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.238 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.238 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71276 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.497 
12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.497 00:09:19.497 real 0m0.216s 00:09:19.497 user 0m0.022s 00:09:19.497 sys 0m0.053s 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:19.497 ************************************ 00:09:19.497 END TEST filesystem_btrfs 00:09:19.497 ************************************ 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.497 ************************************ 00:09:19.497 START TEST filesystem_xfs 00:09:19.497 ************************************ 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:19.497 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:19.498 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:19.498 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:19.498 = sectsz=512 attr=2, projid32bit=1 00:09:19.498 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:19.498 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:19.498 data 
= bsize=4096 blocks=130560, imaxpct=25 00:09:19.498 = sunit=0 swidth=0 blks 00:09:19.498 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:19.498 log =internal log bsize=4096 blocks=16384, version=2 00:09:19.498 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:19.498 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:20.433 Discarding blocks...Done. 00:09:20.433 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:20.433 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71276 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:23.004 00:09:23.004 real 0m3.194s 00:09:23.004 user 0m0.020s 00:09:23.004 sys 0m0.053s 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:23.004 ************************************ 00:09:23.004 END TEST filesystem_xfs 00:09:23.004 ************************************ 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.004 12:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71276 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71276 ']' 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71276 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71276 00:09:23.004 killing process with pid 71276 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71276' 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71276 00:09:23.004 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 71276 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:23.263 00:09:23.263 real 0m13.805s 00:09:23.263 user 0m52.632s 00:09:23.263 sys 0m2.055s 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.263 ************************************ 00:09:23.263 END TEST nvmf_filesystem_no_in_capsule 00:09:23.263 ************************************ 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:23.263 ************************************ 00:09:23.263 START TEST nvmf_filesystem_in_capsule 00:09:23.263 ************************************ 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.263 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.263 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71634 00:09:23.263 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71634 00:09:23.263 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:23.263 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71634 ']' 00:09:23.263 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.263 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.264 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
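The second test body, nvmf_filesystem_in_capsule, reruns the identical ext4/btrfs/xfs cycle with one difference: nvmf_filesystem_part receives 4096 instead of 0, so nvmf_create_transport gets `-c 4096` and writes of up to 4 KiB ride inside the NVMe/TCP command capsule instead of waiting for a separate R2T-driven data transfer. The target is restarted inside the same namespace (new pid 71634) and the bdev, subsystem, and listener are recreated as before. A sketch of the RPC sequence, again substituting scripts/rpc.py for the suite's rpc_cmd wrapper:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # first pass used -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420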
00:09:23.264 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.264 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.264 [2024-12-05 12:12:54.056933] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:09:23.264 [2024-12-05 12:12:54.057038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.523 [2024-12-05 12:12:54.196053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.523 [2024-12-05 12:12:54.239917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.523 [2024-12-05 12:12:54.240241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.523 [2024-12-05 12:12:54.240434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.523 [2024-12-05 12:12:54.240555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.523 [2024-12-05 12:12:54.240589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.523 [2024-12-05 12:12:54.241757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.523 [2024-12-05 12:12:54.241885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.523 [2024-12-05 12:12:54.242614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.523 [2024-12-05 12:12:54.242657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.523 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.523 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:23.523 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.523 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.523 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.782 [2024-12-05 12:12:54.412804] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.782 12:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.782 Malloc1 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.782 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.783 [2024-12-05 12:12:54.594098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:23.783 12:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:23.783 { 00:09:23.783 "aliases": [ 00:09:23.783 "84902bb9-a827-46c0-bdb8-197cc6559a08" 00:09:23.783 ], 00:09:23.783 "assigned_rate_limits": { 00:09:23.783 "r_mbytes_per_sec": 0, 00:09:23.783 "rw_ios_per_sec": 0, 00:09:23.783 "rw_mbytes_per_sec": 0, 00:09:23.783 "w_mbytes_per_sec": 0 00:09:23.783 }, 00:09:23.783 "block_size": 512, 00:09:23.783 "claim_type": "exclusive_write", 00:09:23.783 "claimed": true, 00:09:23.783 "driver_specific": {}, 00:09:23.783 "memory_domains": [ 00:09:23.783 { 00:09:23.783 "dma_device_id": "system", 00:09:23.783 "dma_device_type": 1 00:09:23.783 }, 00:09:23.783 { 00:09:23.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.783 "dma_device_type": 2 00:09:23.783 } 00:09:23.783 ], 00:09:23.783 "name": "Malloc1", 00:09:23.783 "num_blocks": 1048576, 00:09:23.783 "product_name": "Malloc disk", 00:09:23.783 "supported_io_types": { 00:09:23.783 "abort": true, 00:09:23.783 "compare": false, 00:09:23.783 "compare_and_write": false, 00:09:23.783 "copy": true, 00:09:23.783 "flush": true, 00:09:23.783 "get_zone_info": false, 00:09:23.783 "nvme_admin": false, 00:09:23.783 "nvme_io": false, 00:09:23.783 "nvme_io_md": false, 00:09:23.783 "nvme_iov_md": false, 00:09:23.783 "read": true, 00:09:23.783 "reset": true, 00:09:23.783 "seek_data": false, 00:09:23.783 "seek_hole": false, 00:09:23.783 "unmap": true, 00:09:23.783 "write": true, 00:09:23.783 "write_zeroes": true, 00:09:23.783 "zcopy": true, 00:09:23.783 "zone_append": false, 00:09:23.783 "zone_management": false 00:09:23.783 }, 00:09:23.783 "uuid": "84902bb9-a827-46c0-bdb8-197cc6559a08", 00:09:23.783 "zoned": false 00:09:23.783 } 00:09:23.783 ]' 00:09:23.783 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:24.042 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:26.570 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:26.571 12:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:26.571 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.506 ************************************ 00:09:27.506 START TEST filesystem_in_capsule_ext4 00:09:27.506 ************************************ 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:27.506 mke2fs 1.47.0 (5-Feb-2023) 00:09:27.506 Discarding device blocks: 0/522240 done 00:09:27.506 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:27.506 Filesystem UUID: 5b74622f-5f27-44f2-93b1-5d647fdce9a8 00:09:27.506 Superblock backups stored on blocks: 00:09:27.506 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:27.506 00:09:27.506 Allocating group tables: 0/64 done 00:09:27.506 Writing inode tables: 
0/64 done 00:09:27.506 Creating journal (8192 blocks): done 00:09:27.506 Writing superblocks and filesystem accounting information: 0/64 done 00:09:27.506 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:27.506 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:32.773 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71634 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:33.031 00:09:33.031 real 0m5.655s 00:09:33.031 user 0m0.022s 00:09:33.031 sys 0m0.064s 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:33.031 ************************************ 00:09:33.031 END TEST filesystem_in_capsule_ext4 00:09:33.031 ************************************ 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:33.031 
************************************ 00:09:33.031 START TEST filesystem_in_capsule_btrfs 00:09:33.031 ************************************ 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:33.031 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:33.290 btrfs-progs v6.8.1 00:09:33.290 See https://btrfs.readthedocs.io for more information. 00:09:33.290 00:09:33.290 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:33.290 NOTE: several default settings have changed in version 5.15, please make sure 00:09:33.290 this does not affect your deployments: 00:09:33.290 - DUP for metadata (-m dup) 00:09:33.290 - enabled no-holes (-O no-holes) 00:09:33.290 - enabled free-space-tree (-R free-space-tree) 00:09:33.290 00:09:33.290 Label: (null) 00:09:33.290 UUID: 5045066a-9963-4548-a24a-af2563d59d5a 00:09:33.290 Node size: 16384 00:09:33.290 Sector size: 4096 (CPU page size: 4096) 00:09:33.290 Filesystem size: 510.00MiB 00:09:33.290 Block group profiles: 00:09:33.290 Data: single 8.00MiB 00:09:33.290 Metadata: DUP 32.00MiB 00:09:33.290 System: DUP 8.00MiB 00:09:33.290 SSD detected: yes 00:09:33.290 Zoned device: no 00:09:33.290 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:33.290 Checksum: crc32c 00:09:33.290 Number of devices: 1 00:09:33.290 Devices: 00:09:33.290 ID SIZE PATH 00:09:33.290 1 510.00MiB /dev/nvme0n1p1 00:09:33.290 00:09:33.290 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:33.290 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:33.290 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:33.290 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:33.290 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:33.290 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:33.290 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:33.290 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:33.291 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71634 00:09:33.291 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:33.291 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:33.291 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:33.291 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:33.291 ************************************ 00:09:33.291 END TEST filesystem_in_capsule_btrfs 00:09:33.291 ************************************ 00:09:33.291 00:09:33.291 real 0m0.206s 00:09:33.291 user 0m0.021s 00:09:33.291 sys 0m0.059s 00:09:33.291 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
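Both filesystem checks so far (ext4 above, btrfs here) run the same cycle from target/filesystem.sh; stripped of the run_test/xtrace plumbing it is just the following, with the device and mount point used in this run (ext4 substitutes mkfs.ext4 -F, xfs mkfs.xfs -f):

    mkfs.btrfs -f /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # the partition must still be visible after the unmount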
00:09:33.291 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:33.291 ************************************ 00:09:33.291 START TEST filesystem_in_capsule_xfs 00:09:33.291 ************************************ 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:33.291 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:33.291 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:33.291 = sectsz=512 attr=2, projid32bit=1 00:09:33.291 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:33.291 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:33.291 data = bsize=4096 blocks=130560, imaxpct=25 00:09:33.291 = sunit=0 swidth=0 blks 00:09:33.291 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:33.291 log =internal log bsize=4096 blocks=16384, version=2 00:09:33.291 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:33.291 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:34.227 Discarding blocks...Done. 
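A quick consistency check on the mkfs.xfs geometry just printed: agcount=4 x agsize=32640 gives the reported 130560 data blocks, and 130560 x 4096-byte blocks = 534,773,760 bytes, i.e. exactly 510 MiB, which matches the 510.00 MiB GPT partition carved from the 512 MiB (536,870,912-byte) Malloc1 bdev earlier in the run.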
00:09:34.227 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:34.227 12:13:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71634 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:36.128 ************************************ 00:09:36.128 END TEST filesystem_in_capsule_xfs 00:09:36.128 ************************************ 00:09:36.128 00:09:36.128 real 0m2.661s 00:09:36.128 user 0m0.021s 00:09:36.128 sys 0m0.060s 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71634 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71634 ']' 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71634 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71634 00:09:36.128 killing process with pid 71634 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71634' 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 71634 00:09:36.128 12:13:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 71634 00:09:36.694 ************************************ 00:09:36.694 END TEST nvmf_filesystem_in_capsule 00:09:36.694 ************************************ 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
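The teardown that just completed (filesystem.sh@91 through @101) reduces to the commands below, with the NQN and pid from this run; rpc_cmd is effectively a wrapper around scripts/rpc.py, and the harness killprocess() additionally checks the process name and waits on the pid:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 71634                                        # nvmf_tgt pid for this suite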
00:09:36.694 00:09:36.694 real 0m13.262s 00:09:36.694 user 0m50.638s 00:09:36.694 sys 0m1.998s 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:36.694 rmmod nvme_tcp 00:09:36.694 rmmod nvme_fabrics 00:09:36.694 rmmod nvme_keyring 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:36.694 12:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:36.694 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:09:36.953 00:09:36.953 real 0m28.370s 00:09:36.953 user 1m43.674s 00:09:36.953 sys 0m4.593s 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.953 ************************************ 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.953 END TEST nvmf_filesystem 00:09:36.953 ************************************ 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:36.953 ************************************ 00:09:36.953 START TEST nvmf_target_discovery 00:09:36.953 ************************************ 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:36.953 * Looking for test storage... 
00:09:36.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.953 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.211 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.211 --rc genhtml_branch_coverage=1 00:09:37.211 --rc genhtml_function_coverage=1 00:09:37.211 --rc genhtml_legend=1 00:09:37.212 --rc geninfo_all_blocks=1 00:09:37.212 --rc geninfo_unexecuted_blocks=1 00:09:37.212 00:09:37.212 ' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.212 --rc genhtml_branch_coverage=1 00:09:37.212 --rc genhtml_function_coverage=1 00:09:37.212 --rc genhtml_legend=1 00:09:37.212 --rc geninfo_all_blocks=1 00:09:37.212 --rc geninfo_unexecuted_blocks=1 00:09:37.212 00:09:37.212 ' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.212 --rc genhtml_branch_coverage=1 00:09:37.212 --rc genhtml_function_coverage=1 00:09:37.212 --rc genhtml_legend=1 00:09:37.212 --rc geninfo_all_blocks=1 00:09:37.212 --rc geninfo_unexecuted_blocks=1 00:09:37.212 00:09:37.212 ' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.212 --rc genhtml_branch_coverage=1 00:09:37.212 --rc genhtml_function_coverage=1 00:09:37.212 --rc genhtml_legend=1 00:09:37.212 --rc geninfo_all_blocks=1 00:09:37.212 --rc geninfo_unexecuted_blocks=1 00:09:37.212 00:09:37.212 ' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.212 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:37.213 Cannot find device "nvmf_init_br" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:37.213 Cannot find device "nvmf_init_br2" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:37.213 Cannot find device "nvmf_tgt_br" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.213 Cannot find device "nvmf_tgt_br2" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:37.213 Cannot find device "nvmf_init_br" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:37.213 Cannot find device "nvmf_init_br2" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:37.213 Cannot find device "nvmf_tgt_br" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:37.213 Cannot find device "nvmf_tgt_br2" 00:09:37.213 12:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:37.213 Cannot find device "nvmf_br" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:37.213 Cannot find device "nvmf_init_if" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:37.213 Cannot find device "nvmf_init_if2" 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:09:37.213 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:37.213 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:37.472 12:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:37.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:37.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:09:37.472 00:09:37.472 --- 10.0.0.3 ping statistics --- 00:09:37.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.472 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:37.472 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:37.472 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:09:37.472 00:09:37.472 --- 10.0.0.4 ping statistics --- 00:09:37.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.472 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:37.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:37.472 00:09:37.472 --- 10.0.0.1 ping statistics --- 00:09:37.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.472 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:37.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:37.472 00:09:37.472 --- 10.0.0.2 ping statistics --- 00:09:37.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.472 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72208 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72208 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 72208 ']' 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.472 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.473 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.473 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.473 [2024-12-05 12:13:08.305828] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:09:37.473 [2024-12-05 12:13:08.305925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.731 [2024-12-05 12:13:08.453503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.731 [2024-12-05 12:13:08.498812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.731 [2024-12-05 12:13:08.498891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.731 [2024-12-05 12:13:08.498902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.731 [2024-12-05 12:13:08.498909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.731 [2024-12-05 12:13:08.498916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
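The trace above shows nvmfappstart launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and waitforlisten blocking until the RPC socket comes up; the discovery.sh loop traced below then provisions four null bdevs, subsystems, namespaces and listeners over that socket. A minimal standalone sketch of the same sequence, assuming the repo paths shown in this log and approximating waitforlisten with a simple poll on /var/tmp/spdk.sock (the rpc.py default):

  # Launch the target in the test namespace and wait for its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # Create the TCP transport, then one null bdev/subsystem/namespace/listener
  # per cnode, mirroring the rpc_cmd calls in the trace that follows.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      "$rpc" bdev_null_create "Null$i" 102400 512
      "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
  done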
00:09:37.731 [2024-12-05 12:13:08.500183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.731 [2024-12-05 12:13:08.500345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.731 [2024-12-05 12:13:08.500418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.731 [2024-12-05 12:13:08.500419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 [2024-12-05 12:13:08.680126] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 Null1 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 [2024-12-05 12:13:08.724429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 Null2 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:09:37.990 Null3 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.990 Null4 00:09:37.990 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.991 12:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.991 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 4420 00:09:38.248 00:09:38.248 Discovery Log Number of Records 6, Generation counter 6 00:09:38.248 =====Discovery Log Entry 0====== 00:09:38.248 trtype: tcp 00:09:38.248 adrfam: ipv4 00:09:38.248 subtype: current discovery subsystem 00:09:38.248 treq: not required 00:09:38.248 portid: 0 00:09:38.248 trsvcid: 4420 00:09:38.248 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:38.248 traddr: 10.0.0.3 00:09:38.248 eflags: explicit discovery connections, duplicate discovery information 00:09:38.248 sectype: none 00:09:38.248 =====Discovery Log Entry 1====== 00:09:38.248 trtype: tcp 00:09:38.248 adrfam: ipv4 00:09:38.248 subtype: nvme subsystem 00:09:38.248 treq: not required 00:09:38.248 portid: 0 00:09:38.248 trsvcid: 4420 00:09:38.248 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:38.248 traddr: 10.0.0.3 00:09:38.248 eflags: none 00:09:38.248 sectype: none 00:09:38.248 =====Discovery Log Entry 2====== 00:09:38.248 trtype: tcp 00:09:38.248 adrfam: ipv4 00:09:38.248 subtype: nvme subsystem 00:09:38.248 treq: not required 00:09:38.248 portid: 0 00:09:38.248 trsvcid: 4420 00:09:38.248 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:38.248 traddr: 10.0.0.3 00:09:38.249 eflags: none 00:09:38.249 sectype: none 00:09:38.249 =====Discovery Log Entry 3====== 00:09:38.249 trtype: tcp 00:09:38.249 adrfam: ipv4 00:09:38.249 subtype: nvme subsystem 00:09:38.249 treq: not required 00:09:38.249 portid: 0 00:09:38.249 trsvcid: 4420 00:09:38.249 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:38.249 traddr: 10.0.0.3 00:09:38.249 eflags: none 00:09:38.249 sectype: none 00:09:38.249 =====Discovery Log Entry 4====== 00:09:38.249 trtype: tcp 00:09:38.249 adrfam: ipv4 00:09:38.249 subtype: nvme subsystem 
00:09:38.249 treq: not required 00:09:38.249 portid: 0 00:09:38.249 trsvcid: 4420 00:09:38.249 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:38.249 traddr: 10.0.0.3 00:09:38.249 eflags: none 00:09:38.249 sectype: none 00:09:38.249 =====Discovery Log Entry 5====== 00:09:38.249 trtype: tcp 00:09:38.249 adrfam: ipv4 00:09:38.249 subtype: discovery subsystem referral 00:09:38.249 treq: not required 00:09:38.249 portid: 0 00:09:38.249 trsvcid: 4430 00:09:38.249 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:38.249 traddr: 10.0.0.3 00:09:38.249 eflags: none 00:09:38.249 sectype: none 00:09:38.249 Perform nvmf subsystem discovery via RPC 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 [ 00:09:38.249 { 00:09:38.249 "allow_any_host": true, 00:09:38.249 "hosts": [], 00:09:38.249 "listen_addresses": [ 00:09:38.249 { 00:09:38.249 "adrfam": "IPv4", 00:09:38.249 "traddr": "10.0.0.3", 00:09:38.249 "trsvcid": "4420", 00:09:38.249 "trtype": "TCP" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:38.249 "subtype": "Discovery" 00:09:38.249 }, 00:09:38.249 { 00:09:38.249 "allow_any_host": true, 00:09:38.249 "hosts": [], 00:09:38.249 "listen_addresses": [ 00:09:38.249 { 00:09:38.249 "adrfam": "IPv4", 00:09:38.249 "traddr": "10.0.0.3", 00:09:38.249 "trsvcid": "4420", 00:09:38.249 "trtype": "TCP" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "max_cntlid": 65519, 00:09:38.249 "max_namespaces": 32, 00:09:38.249 "min_cntlid": 1, 00:09:38.249 "model_number": "SPDK bdev Controller", 00:09:38.249 "namespaces": [ 00:09:38.249 { 00:09:38.249 "bdev_name": "Null1", 00:09:38.249 "name": "Null1", 00:09:38.249 "nguid": "85559CFED2BD45C69A5E0B2A37623DE3", 00:09:38.249 "nsid": 1, 00:09:38.249 "uuid": "85559cfe-d2bd-45c6-9a5e-0b2a37623de3" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:38.249 "serial_number": "SPDK00000000000001", 00:09:38.249 "subtype": "NVMe" 00:09:38.249 }, 00:09:38.249 { 00:09:38.249 "allow_any_host": true, 00:09:38.249 "hosts": [], 00:09:38.249 "listen_addresses": [ 00:09:38.249 { 00:09:38.249 "adrfam": "IPv4", 00:09:38.249 "traddr": "10.0.0.3", 00:09:38.249 "trsvcid": "4420", 00:09:38.249 "trtype": "TCP" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "max_cntlid": 65519, 00:09:38.249 "max_namespaces": 32, 00:09:38.249 "min_cntlid": 1, 00:09:38.249 "model_number": "SPDK bdev Controller", 00:09:38.249 "namespaces": [ 00:09:38.249 { 00:09:38.249 "bdev_name": "Null2", 00:09:38.249 "name": "Null2", 00:09:38.249 "nguid": "40EF9DC0D31D4003BA75883CC5D6726B", 00:09:38.249 "nsid": 1, 00:09:38.249 "uuid": "40ef9dc0-d31d-4003-ba75-883cc5d6726b" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:38.249 "serial_number": "SPDK00000000000002", 00:09:38.249 "subtype": "NVMe" 00:09:38.249 }, 00:09:38.249 { 00:09:38.249 "allow_any_host": true, 00:09:38.249 "hosts": [], 00:09:38.249 "listen_addresses": [ 00:09:38.249 { 00:09:38.249 "adrfam": "IPv4", 00:09:38.249 "traddr": "10.0.0.3", 00:09:38.249 "trsvcid": "4420", 00:09:38.249 
"trtype": "TCP" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "max_cntlid": 65519, 00:09:38.249 "max_namespaces": 32, 00:09:38.249 "min_cntlid": 1, 00:09:38.249 "model_number": "SPDK bdev Controller", 00:09:38.249 "namespaces": [ 00:09:38.249 { 00:09:38.249 "bdev_name": "Null3", 00:09:38.249 "name": "Null3", 00:09:38.249 "nguid": "4C6FE6C48146427FBF5A40F2E1DF5802", 00:09:38.249 "nsid": 1, 00:09:38.249 "uuid": "4c6fe6c4-8146-427f-bf5a-40f2e1df5802" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:38.249 "serial_number": "SPDK00000000000003", 00:09:38.249 "subtype": "NVMe" 00:09:38.249 }, 00:09:38.249 { 00:09:38.249 "allow_any_host": true, 00:09:38.249 "hosts": [], 00:09:38.249 "listen_addresses": [ 00:09:38.249 { 00:09:38.249 "adrfam": "IPv4", 00:09:38.249 "traddr": "10.0.0.3", 00:09:38.249 "trsvcid": "4420", 00:09:38.249 "trtype": "TCP" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "max_cntlid": 65519, 00:09:38.249 "max_namespaces": 32, 00:09:38.249 "min_cntlid": 1, 00:09:38.249 "model_number": "SPDK bdev Controller", 00:09:38.249 "namespaces": [ 00:09:38.249 { 00:09:38.249 "bdev_name": "Null4", 00:09:38.249 "name": "Null4", 00:09:38.249 "nguid": "BD3228587F1B479BB07E884535C92240", 00:09:38.249 "nsid": 1, 00:09:38.249 "uuid": "bd322858-7f1b-479b-b07e-884535c92240" 00:09:38.249 } 00:09:38.249 ], 00:09:38.249 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:38.249 "serial_number": "SPDK00000000000004", 00:09:38.249 "subtype": "NVMe" 00:09:38.249 } 00:09:38.249 ] 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:38.249 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.249 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:38.250 12:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.250 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.508 rmmod nvme_tcp 00:09:38.508 rmmod nvme_fabrics 00:09:38.508 rmmod nvme_keyring 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72208 ']' 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72208 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72208 ']' 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72208 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72208 00:09:38.508 killing process with pid 72208 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72208' 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72208 00:09:38.508 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72208 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:38.766 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:09:39.024 00:09:39.024 real 0m2.041s 00:09:39.024 user 0m3.956s 00:09:39.024 sys 0m0.708s 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:39.024 ************************************ 00:09:39.024 END TEST nvmf_target_discovery 00:09:39.024 ************************************ 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:39.024 ************************************ 00:09:39.024 START TEST nvmf_referrals 00:09:39.024 ************************************ 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:39.024 * Looking for test storage... 00:09:39.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.024 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.293 --rc genhtml_branch_coverage=1 00:09:39.293 --rc genhtml_function_coverage=1 00:09:39.293 --rc genhtml_legend=1 00:09:39.293 --rc geninfo_all_blocks=1 00:09:39.293 --rc geninfo_unexecuted_blocks=1 00:09:39.293 00:09:39.293 ' 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.293 --rc genhtml_branch_coverage=1 00:09:39.293 --rc genhtml_function_coverage=1 00:09:39.293 --rc genhtml_legend=1 00:09:39.293 --rc geninfo_all_blocks=1 00:09:39.293 --rc geninfo_unexecuted_blocks=1 00:09:39.293 00:09:39.293 ' 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.293 --rc genhtml_branch_coverage=1 00:09:39.293 --rc genhtml_function_coverage=1 00:09:39.293 --rc genhtml_legend=1 00:09:39.293 --rc geninfo_all_blocks=1 00:09:39.293 --rc geninfo_unexecuted_blocks=1 00:09:39.293 00:09:39.293 ' 00:09:39.293 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.294 --rc genhtml_branch_coverage=1 00:09:39.294 --rc genhtml_function_coverage=1 00:09:39.294 --rc genhtml_legend=1 00:09:39.294 --rc geninfo_all_blocks=1 00:09:39.294 --rc geninfo_unexecuted_blocks=1 00:09:39.294 00:09:39.294 ' 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
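The xtrace just above steps through the lt()/cmp_versions helpers from scripts/common.sh that gate the lcov flags on "lcov --version" (here lt 1.15 2 succeeds, so the legacy --rc options are used). A condensed sketch of that comparison, assuming plain dot/dash-separated numeric components rather than the full scripts/common.sh implementation:

  # Return success when version $1 sorts strictly below version $2.
  lt() {
      local -a ver1 ver2
      local v
      IFS=.- read -ra ver1 <<< "$1"
      IFS=.- read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1  # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov predates 2.x: enable legacy --rc branch/function coverage options"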
00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.294 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:39.294 12:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:39.294 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:39.295 Cannot find device "nvmf_init_br" 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:39.295 Cannot find device "nvmf_init_br2" 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:09:39.295 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:39.295 Cannot find device "nvmf_tgt_br" 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:39.295 Cannot find device "nvmf_tgt_br2" 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:39.295 Cannot find device "nvmf_init_br" 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:39.295 Cannot find device "nvmf_init_br2" 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:39.295 Cannot find device "nvmf_tgt_br" 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:39.295 Cannot find device "nvmf_tgt_br2" 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:39.295 Cannot find device "nvmf_br" 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:39.295 Cannot find device "nvmf_init_if" 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:39.295 Cannot find device "nvmf_init_if2" 00:09:39.295 12:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:39.295 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:39.570 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:39.570 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:39.570 00:09:39.570 --- 10.0.0.3 ping statistics --- 00:09:39.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.570 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:39.570 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:39.570 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:39.570 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:09:39.570 00:09:39.571 --- 10.0.0.4 ping statistics --- 00:09:39.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.571 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:39.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:39.571 00:09:39.571 --- 10.0.0.1 ping statistics --- 00:09:39.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.571 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:39.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:39.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:39.571 00:09:39.571 --- 10.0.0.2 ping statistics --- 00:09:39.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.571 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72474 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72474 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72474 ']' 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.571 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:39.829 [2024-12-05 12:13:10.460775] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
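Up to this point the log is nvmftestinit building the virtual test network: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, iptables ACCEPT rules for the NVMe/TCP port, and ping checks in all four directions before nvmf_tgt is started inside the namespace. A minimal standalone sketch of the same topology, reduced to a single initiator/target pair (the helper in nvmf/common.sh also creates the *_if2 interfaces) and assuming root privileges and iproute2:

    # target network namespace that will host nvmf_tgt
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end carries the IP, the *_br end joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addressing as in the log: initiator 10.0.0.1, target 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side ends so the two network stacks share an L2 segment
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # let NVMe/TCP traffic in, then sanity-check connectivity both ways
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, NVMF_TARGET_NS_CMD simply prefixes the target binary with "ip netns exec nvmf_tgt_ns_spdk", which is why nvmf_tgt below is launched that way.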
00:09:39.829 [2024-12-05 12:13:10.460871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.829 [2024-12-05 12:13:10.605369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.829 [2024-12-05 12:13:10.650118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.829 [2024-12-05 12:13:10.650184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.829 [2024-12-05 12:13:10.650194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.829 [2024-12-05 12:13:10.650201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.829 [2024-12-05 12:13:10.650207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.830 [2024-12-05 12:13:10.651449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.830 [2024-12-05 12:13:10.651582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.830 [2024-12-05 12:13:10.651701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.830 [2024-12-05 12:13:10.651702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 [2024-12-05 12:13:10.832170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 [2024-12-05 12:13:10.845106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:40.090 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.350 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:40.350 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:40.350 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:40.350 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:40.350 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:40.350 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:40.350 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:40.350 12:13:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:40.350 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:40.609 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:40.868 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:41.127 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:41.127 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:41.127 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.127 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.127 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.127 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:41.128 12:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:41.128 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:41.387 12:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -a 10.0.0.3 -s 8009 -o json 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:41.387 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.646 
12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.646 rmmod nvme_tcp 00:09:41.646 rmmod nvme_fabrics 00:09:41.646 rmmod nvme_keyring 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72474 ']' 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72474 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72474 ']' 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72474 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.646 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72474 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72474' 00:09:41.906 killing process with pid 72474 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72474 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72474 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:41.906 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:09:42.165 00:09:42.165 real 0m3.198s 00:09:42.165 user 0m9.174s 00:09:42.165 sys 0m1.019s 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:42.165 ************************************ 00:09:42.165 END TEST nvmf_referrals 00:09:42.165 ************************************ 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:42.165 ************************************ 00:09:42.165 START TEST nvmf_connect_disconnect 00:09:42.165 ************************************ 00:09:42.165 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:42.426 * Looking for test storage... 
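The nvmf_referrals test that just finished is, underneath the rpc_cmd and get_referral_ips helpers, a short RPC conversation with the discovery service listening on 10.0.0.3:8009. rpc_cmd wraps SPDK's scripts/rpc.py, so a hedged by-hand sketch of the same flow (rpc.py assumed to be on PATH and talking to the default /var/tmp/spdk.sock; the log additionally passes --hostnqn and --hostid to nvme discover):

    # add referrals; -n attaches the referral to a specific subsystem NQN
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # count what the target reports
    rpc.py nvmf_discovery_get_referrals | jq length
    # what an initiator sees: referrals show up in the discovery log page,
    # minus the entry for the discovery subsystem being queried
    nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # remove them again, matching the arguments used on add
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The test passes because the RPC view and the nvme-discover view of the referral list stay in lockstep through every add and remove above.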
00:09:42.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.426 --rc genhtml_branch_coverage=1 00:09:42.426 --rc genhtml_function_coverage=1 00:09:42.426 --rc genhtml_legend=1 00:09:42.426 --rc geninfo_all_blocks=1 00:09:42.426 --rc geninfo_unexecuted_blocks=1 00:09:42.426 00:09:42.426 ' 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.426 --rc genhtml_branch_coverage=1 00:09:42.426 --rc genhtml_function_coverage=1 00:09:42.426 --rc genhtml_legend=1 00:09:42.426 --rc geninfo_all_blocks=1 00:09:42.426 --rc geninfo_unexecuted_blocks=1 00:09:42.426 00:09:42.426 ' 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.426 --rc genhtml_branch_coverage=1 00:09:42.426 --rc genhtml_function_coverage=1 00:09:42.426 --rc genhtml_legend=1 00:09:42.426 --rc geninfo_all_blocks=1 00:09:42.426 --rc geninfo_unexecuted_blocks=1 00:09:42.426 00:09:42.426 ' 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.426 --rc genhtml_branch_coverage=1 00:09:42.426 --rc genhtml_function_coverage=1 00:09:42.426 --rc genhtml_legend=1 00:09:42.426 --rc geninfo_all_blocks=1 00:09:42.426 --rc geninfo_unexecuted_blocks=1 00:09:42.426 00:09:42.426 ' 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.426 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.427 12:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.427 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:42.427 Cannot find device "nvmf_init_br" 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:42.427 Cannot find device "nvmf_init_br2" 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:42.427 Cannot find device "nvmf_tgt_br" 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.427 Cannot find device "nvmf_tgt_br2" 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:42.427 Cannot find device "nvmf_init_br" 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:42.427 Cannot find device "nvmf_init_br2" 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:09:42.427 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:42.687 Cannot find device "nvmf_tgt_br" 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:42.687 Cannot find device "nvmf_tgt_br2" 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
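The string of "Cannot find device" failures here, and the "Cannot open network namespace" ones just below, are expected: nvmf_veth_init begins with a best-effort teardown of anything a previous run may have left behind, and the xtrace shows each failing ip command immediately followed by a bare true, i.e. the usual || true idiom. A condensed sketch of that pre-cleanup pattern (illustrative, not the test script itself; device names taken from the trace):

    # Best-effort pre-cleanup; every step may fail on a clean host.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true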
00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:42.687 Cannot find device "nvmf_br" 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:42.687 Cannot find device "nvmf_init_if" 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:42.687 Cannot find device "nvmf_init_if2" 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:42.687 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:42.688 12:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:42.688 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:42.948 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:42.948 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.134 ms 00:09:42.948 00:09:42.948 --- 10.0.0.3 ping statistics --- 00:09:42.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.948 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:42.948 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:42.948 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:09:42.948 00:09:42.948 --- 10.0.0.4 ping statistics --- 00:09:42.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.948 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:42.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:42.948 00:09:42.948 --- 10.0.0.1 ping statistics --- 00:09:42.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.948 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:42.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:09:42.948 00:09:42.948 --- 10.0.0.2 ping statistics --- 00:09:42.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.948 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=72824 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 72824 00:09:42.948 12:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 72824 ']' 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.948 12:13:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:42.948 [2024-12-05 12:13:13.751842] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:09:42.948 [2024-12-05 12:13:13.751943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.208 [2024-12-05 12:13:13.897623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.208 [2024-12-05 12:13:13.941671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.208 [2024-12-05 12:13:13.941722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.208 [2024-12-05 12:13:13.941732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.208 [2024-12-05 12:13:13.941740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.208 [2024-12-05 12:13:13.941747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
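With nvmf_tgt now starting inside the namespace, the topology nvmf_veth_init built above is worth summarizing: the initiator-side veth ends nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24) stay in the root namespace, the target-side ends nvmf_tgt_if (10.0.0.3/24) and nvmf_tgt_if2 (10.0.0.4/24) are moved into nvmf_tgt_ns_spdk, and the *_br peer ends are enslaved to the nvmf_br bridge, with iptables ACCEPT rules opening TCP port 4420. A minimal one-pair sketch condensed from the commands traced above (the test builds a second pair the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root ns -> target ns, as verified above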
00:09:43.208 [2024-12-05 12:13:13.942825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.208 [2024-12-05 12:13:13.943054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.208 [2024-12-05 12:13:13.943057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.208 [2024-12-05 12:13:13.942881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.208 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.208 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:09:43.208 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.208 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.208 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 [2024-12-05 12:13:14.123934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 12:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.468 [2024-12-05 12:13:14.191519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:43.468 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:45.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.886 rmmod nvme_tcp 00:09:54.886 rmmod nvme_fabrics 00:09:54.886 rmmod nvme_keyring 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 72824 ']' 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 72824 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 72824 ']' 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 72824 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
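The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are the test body itself: a 64 MiB Malloc bdev with 512-byte blocks is exposed as a namespace of subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, and the host connects and disconnects num_iterations=5 times before the teardown that follows. Roughly the same sequence driven by hand, assuming rpc_cmd forwards to scripts/rpc.py; the host-side connect flags are hidden by set +x, so the nvme-cli invocation below is an assumption:

    # Target setup, matching the rpc_cmd calls traced above:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                 # returns Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Host-side loop (hypothetical flags):
    for i in {1..5}; do
        nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "disconnected 1 controller(s)"
    done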
00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72824 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.886 killing process with pid 72824 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72824' 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 72824 00:09:54.886 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 72824 00:09:55.165 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:55.166 12:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.166 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:55.166 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:55.166 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:55.166 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:55.166 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:55.424 12:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:09:55.424 00:09:55.424 real 0m13.214s 00:09:55.424 user 0m47.265s 00:09:55.424 sys 0m1.648s 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:55.424 ************************************ 00:09:55.424 END TEST nvmf_connect_disconnect 00:09:55.424 ************************************ 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:55.424 ************************************ 00:09:55.424 START TEST nvmf_multitarget 00:09:55.424 ************************************ 00:09:55.424 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:55.684 * Looking for test storage... 
00:09:55.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.684 --rc genhtml_branch_coverage=1 00:09:55.684 --rc genhtml_function_coverage=1 00:09:55.684 --rc genhtml_legend=1 00:09:55.684 --rc geninfo_all_blocks=1 00:09:55.684 --rc geninfo_unexecuted_blocks=1 00:09:55.684 00:09:55.684 ' 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.684 --rc genhtml_branch_coverage=1 00:09:55.684 --rc genhtml_function_coverage=1 00:09:55.684 --rc genhtml_legend=1 00:09:55.684 --rc geninfo_all_blocks=1 00:09:55.684 --rc geninfo_unexecuted_blocks=1 00:09:55.684 00:09:55.684 ' 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.684 --rc genhtml_branch_coverage=1 00:09:55.684 --rc genhtml_function_coverage=1 00:09:55.684 --rc genhtml_legend=1 00:09:55.684 --rc geninfo_all_blocks=1 00:09:55.684 --rc geninfo_unexecuted_blocks=1 00:09:55.684 00:09:55.684 ' 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.684 --rc genhtml_branch_coverage=1 00:09:55.684 --rc genhtml_function_coverage=1 00:09:55.684 --rc genhtml_legend=1 00:09:55.684 --rc geninfo_all_blocks=1 00:09:55.684 --rc geninfo_unexecuted_blocks=1 00:09:55.684 00:09:55.684 ' 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.684 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.685 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:55.685 Cannot find device "nvmf_init_br" 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:55.685 Cannot find device "nvmf_init_br2" 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:55.685 Cannot find device "nvmf_tgt_br" 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.685 Cannot find device "nvmf_tgt_br2" 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:55.685 Cannot find device "nvmf_init_br" 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:55.685 Cannot find device "nvmf_init_br2" 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:55.685 Cannot find device "nvmf_tgt_br" 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:55.685 Cannot find device "nvmf_tgt_br2" 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:09:55.685 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:55.945 Cannot find device "nvmf_br" 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:55.945 Cannot find device "nvmf_init_if" 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:55.945 Cannot find device "nvmf_init_if2" 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:55.945 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:56.204 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:56.204 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:09:56.204 00:09:56.204 --- 10.0.0.3 ping statistics --- 00:09:56.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.204 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:56.204 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:56.204 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:09:56.204 00:09:56.204 --- 10.0.0.4 ping statistics --- 00:09:56.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.204 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:56.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:56.204 00:09:56.204 --- 10.0.0.1 ping statistics --- 00:09:56.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.204 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:56.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:56.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:56.204 00:09:56.204 --- 10.0.0.2 ping statistics --- 00:09:56.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.204 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.204 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73266 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73266 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73266 ']' 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.205 12:13:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:56.205 [2024-12-05 12:13:26.921884] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:09:56.205 [2024-12-05 12:13:26.921961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.205 [2024-12-05 12:13:27.054161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.464 [2024-12-05 12:13:27.099708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.464 [2024-12-05 12:13:27.099800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.464 [2024-12-05 12:13:27.099810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.464 [2024-12-05 12:13:27.099817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.464 [2024-12-05 12:13:27.099824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.464 [2024-12-05 12:13:27.101082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.464 [2024-12-05 12:13:27.101193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.464 [2024-12-05 12:13:27.101355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.464 [2024-12-05 12:13:27.101360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:56.464 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:56.722 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:56.722 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:56.722 "nvmf_tgt_1" 00:09:56.722 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:56.982 "nvmf_tgt_2" 00:09:56.982 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:56.982 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:09:57.241 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:57.241 12:13:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:57.241 true 00:09:57.241 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:57.500 true 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.501 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.501 rmmod nvme_tcp 00:09:57.760 rmmod nvme_fabrics 00:09:57.760 rmmod nvme_keyring 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73266 ']' 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73266 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73266 ']' 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73266 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73266 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.760 killing process with pid 73266 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
73266' 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73266 00:09:57.760 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73266 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.019 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.278 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:09:58.278 00:09:58.278 
real 0m2.626s 00:09:58.278 user 0m7.431s 00:09:58.278 sys 0m0.768s 00:09:58.278 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.278 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:58.278 ************************************ 00:09:58.278 END TEST nvmf_multitarget 00:09:58.278 ************************************ 00:09:58.278 12:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:58.278 12:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.278 12:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.278 12:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:58.278 ************************************ 00:09:58.278 START TEST nvmf_rpc 00:09:58.278 ************************************ 00:09:58.278 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:58.278 * Looking for test storage... 00:09:58.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.278 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.279 --rc genhtml_branch_coverage=1 00:09:58.279 --rc genhtml_function_coverage=1 00:09:58.279 --rc genhtml_legend=1 00:09:58.279 --rc geninfo_all_blocks=1 00:09:58.279 --rc geninfo_unexecuted_blocks=1 00:09:58.279 00:09:58.279 ' 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.279 --rc genhtml_branch_coverage=1 00:09:58.279 --rc genhtml_function_coverage=1 00:09:58.279 --rc genhtml_legend=1 00:09:58.279 --rc geninfo_all_blocks=1 00:09:58.279 --rc geninfo_unexecuted_blocks=1 00:09:58.279 00:09:58.279 ' 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.279 --rc genhtml_branch_coverage=1 00:09:58.279 --rc genhtml_function_coverage=1 00:09:58.279 --rc genhtml_legend=1 00:09:58.279 --rc geninfo_all_blocks=1 00:09:58.279 --rc geninfo_unexecuted_blocks=1 00:09:58.279 00:09:58.279 ' 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.279 --rc genhtml_branch_coverage=1 00:09:58.279 --rc genhtml_function_coverage=1 00:09:58.279 --rc genhtml_legend=1 00:09:58.279 --rc geninfo_all_blocks=1 00:09:58.279 --rc geninfo_unexecuted_blocks=1 00:09:58.279 00:09:58.279 ' 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.279 12:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.279 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.538 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:58.538 Cannot find device "nvmf_init_br" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:09:58.538 12:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:58.538 Cannot find device "nvmf_init_br2" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:58.538 Cannot find device "nvmf_tgt_br" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.538 Cannot find device "nvmf_tgt_br2" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:58.538 Cannot find device "nvmf_init_br" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:58.538 Cannot find device "nvmf_init_br2" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:58.538 Cannot find device "nvmf_tgt_br" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:58.538 Cannot find device "nvmf_tgt_br2" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:58.538 Cannot find device "nvmf_br" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:58.538 Cannot find device "nvmf_init_if" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:58.538 Cannot find device "nvmf_init_if2" 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.538 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.538 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:58.539 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:58.798 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:58.798 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:09:58.798 00:09:58.798 --- 10.0.0.3 ping statistics --- 00:09:58.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.798 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:58.798 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:58.798 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:09:58.798 00:09:58.798 --- 10.0.0.4 ping statistics --- 00:09:58.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.798 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:58.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:58.798 00:09:58.798 --- 10.0.0.1 ping statistics --- 00:09:58.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.798 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:58.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:58.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:09:58.798 00:09:58.798 --- 10.0.0.2 ping statistics --- 00:09:58.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.798 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=73533 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 73533 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 73533 ']' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.798 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.799 12:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.799 [2024-12-05 12:13:29.645451] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
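Once waitforlisten returns for pid 73533, everything the rpc test does goes over JSON-RPC on /var/tmp/spdk.sock; the `rpc_cmd` calls in the trace are the harness's wrapper around that socket. A sketch of the stats/transport exchange that follows, assuming SPDK's stock scripts/rpc.py as the client (the wrapper itself is not shown in this log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    # -m 0xF started four reactors, so four poll groups report in;
    # their "transports" arrays stay empty until a transport exists
    $RPC nvmf_get_stats | jq '.poll_groups[].name' | wc -l   # expect 4

    # attach the TCP transport with the trace's options
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # every poll group should now list {"trtype": "TCP"}
    $RPC nvmf_get_stats | jq '.poll_groups[0].transports[0]'

The jcount/jsum helpers traced below are just these jq pipelines finished with `wc -l` or an `awk '{s+=$1}END{print s}'` sum over the qpair counters.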
00:09:58.799 [2024-12-05 12:13:29.645542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.057 [2024-12-05 12:13:29.791408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.057 [2024-12-05 12:13:29.837475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.057 [2024-12-05 12:13:29.837534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.057 [2024-12-05 12:13:29.837543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.057 [2024-12-05 12:13:29.837550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.057 [2024-12-05 12:13:29.837556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.057 [2024-12-05 12:13:29.838626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.057 [2024-12-05 12:13:29.838747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.057 [2024-12-05 12:13:29.838836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.057 [2024-12-05 12:13:29.838837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:59.993 "poll_groups": [ 00:09:59.993 { 00:09:59.993 "admin_qpairs": 0, 00:09:59.993 "completed_nvme_io": 0, 00:09:59.993 "current_admin_qpairs": 0, 00:09:59.993 "current_io_qpairs": 0, 00:09:59.993 "io_qpairs": 0, 00:09:59.993 "name": "nvmf_tgt_poll_group_000", 00:09:59.993 "pending_bdev_io": 0, 00:09:59.993 "transports": [] 00:09:59.993 }, 00:09:59.993 { 00:09:59.993 "admin_qpairs": 0, 00:09:59.993 "completed_nvme_io": 0, 00:09:59.993 "current_admin_qpairs": 0, 00:09:59.993 "current_io_qpairs": 0, 00:09:59.993 "io_qpairs": 0, 00:09:59.993 "name": "nvmf_tgt_poll_group_001", 00:09:59.993 "pending_bdev_io": 0, 00:09:59.993 "transports": [] 00:09:59.993 }, 00:09:59.993 { 00:09:59.993 "admin_qpairs": 0, 00:09:59.993 "completed_nvme_io": 0, 00:09:59.993 "current_admin_qpairs": 0, 00:09:59.993 "current_io_qpairs": 0, 
00:09:59.993 "io_qpairs": 0, 00:09:59.993 "name": "nvmf_tgt_poll_group_002", 00:09:59.993 "pending_bdev_io": 0, 00:09:59.993 "transports": [] 00:09:59.993 }, 00:09:59.993 { 00:09:59.993 "admin_qpairs": 0, 00:09:59.993 "completed_nvme_io": 0, 00:09:59.993 "current_admin_qpairs": 0, 00:09:59.993 "current_io_qpairs": 0, 00:09:59.993 "io_qpairs": 0, 00:09:59.993 "name": "nvmf_tgt_poll_group_003", 00:09:59.993 "pending_bdev_io": 0, 00:09:59.993 "transports": [] 00:09:59.993 } 00:09:59.993 ], 00:09:59.993 "tick_rate": 2200000000 00:09:59.993 }' 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:59.993 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.994 [2024-12-05 12:13:30.809273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:59.994 "poll_groups": [ 00:09:59.994 { 00:09:59.994 "admin_qpairs": 0, 00:09:59.994 "completed_nvme_io": 0, 00:09:59.994 "current_admin_qpairs": 0, 00:09:59.994 "current_io_qpairs": 0, 00:09:59.994 "io_qpairs": 0, 00:09:59.994 "name": "nvmf_tgt_poll_group_000", 00:09:59.994 "pending_bdev_io": 0, 00:09:59.994 "transports": [ 00:09:59.994 { 00:09:59.994 "trtype": "TCP" 00:09:59.994 } 00:09:59.994 ] 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "admin_qpairs": 0, 00:09:59.994 "completed_nvme_io": 0, 00:09:59.994 "current_admin_qpairs": 0, 00:09:59.994 "current_io_qpairs": 0, 00:09:59.994 "io_qpairs": 0, 00:09:59.994 "name": "nvmf_tgt_poll_group_001", 00:09:59.994 "pending_bdev_io": 0, 00:09:59.994 "transports": [ 00:09:59.994 { 00:09:59.994 "trtype": "TCP" 00:09:59.994 } 00:09:59.994 ] 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "admin_qpairs": 0, 00:09:59.994 "completed_nvme_io": 0, 00:09:59.994 "current_admin_qpairs": 0, 00:09:59.994 "current_io_qpairs": 0, 00:09:59.994 "io_qpairs": 0, 00:09:59.994 "name": "nvmf_tgt_poll_group_002", 00:09:59.994 "pending_bdev_io": 0, 00:09:59.994 "transports": [ 00:09:59.994 { 00:09:59.994 "trtype": "TCP" 00:09:59.994 } 
00:09:59.994 ] 00:09:59.994 }, 00:09:59.994 { 00:09:59.994 "admin_qpairs": 0, 00:09:59.994 "completed_nvme_io": 0, 00:09:59.994 "current_admin_qpairs": 0, 00:09:59.994 "current_io_qpairs": 0, 00:09:59.994 "io_qpairs": 0, 00:09:59.994 "name": "nvmf_tgt_poll_group_003", 00:09:59.994 "pending_bdev_io": 0, 00:09:59.994 "transports": [ 00:09:59.994 { 00:09:59.994 "trtype": "TCP" 00:09:59.994 } 00:09:59.994 ] 00:09:59.994 } 00:09:59.994 ], 00:09:59.994 "tick_rate": 2200000000 00:09:59.994 }' 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:59.994 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:00.252 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:00.252 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:00.252 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.253 Malloc1 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.253 12:13:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:00.253 12:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.253 [2024-12-05 12:13:31.030029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -a 10.0.0.3 -s 4420 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -a 10.0.0.3 -s 4420 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -a 10.0.0.3 -s 4420 00:10:00.253 [2024-12-05 12:13:31.058567] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3' 00:10:00.253 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:00.253 could not add new controller: failed to write to nvme-fabrics device 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
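The Input/output error above is the expected outcome: cnode1 was created with `-a`, but `nvmf_subsystem_allow_any_host -d` switched it back to allow-list mode, so the kernel initiator is refused at connect time ("does not allow host") and the `NOT` wrapper turns that failure into a pass. The rest of the trace walks the allow-list both ways; condensed, again assuming stock rpc.py and reusing this run's NQNs:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
    HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3

    # whitelist exactly this host: the same connect now succeeds
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID \
      -t tcp -n $SUBNQN -a 10.0.0.3 -s 4420
    nvme disconnect -n $SUBNQN

    # drop the host again: the connect is refused once more
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

    # or reopen the subsystem to all hosts
    $RPC nvmf_subsystem_allow_any_host -e $SUBNQN
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID \
      -t tcp -n $SUBNQN -a 10.0.0.3 -s 4420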
00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.253 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:00.511 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.511 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:00.511 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.511 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:00.511 12:13:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:02.414 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:02.414 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:02.414 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.414 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:02.414 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.414 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:02.414 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:02.674 [2024-12-05 12:13:33.380813] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3' 00:10:02.674 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:02.674 could not add new controller: failed to write to nvme-fabrics device 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.674 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:02.933 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:02.933 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:02.933 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.933 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:02.933 12:13:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.836 [2024-12-05 12:13:35.697943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.836 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:05.094 12:13:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:07.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.630 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.630 [2024-12-05 12:13:38.007854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.630 12:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:07.630 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:07.631 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.631 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:07.631 12:13:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 12:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 [2024-12-05 12:13:40.318839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.602 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:09.861 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:09.861 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:09.861 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.861 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:09.861 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:11.768 12:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.768 [2024-12-05 12:13:42.624254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.768 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:12.028 12:13:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:14.559 12:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.559 [2024-12-05 12:13:44.934079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:14.559 12:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:14.559 12:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.559 12:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:14.559 12:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.559 12:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:14.559 12:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
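
Each of the five passes above runs the same lifecycle from target/rpc.sh@81-94: create the subsystem with serial SPDKISFASTANDAWESOME, add the TCP listener and the Malloc1 namespace as nsid 5, allow any host, connect, wait for the namespace to surface, then disconnect and tear down. The wait is the waitforserial helper whose xtrace lines repeat throughout; reconstructed from those lines (a sketch only — the real helper lives in the suite's autotest_common.sh, and the 15-retry/2-second numbers are simply what the trace shows):

# Poll until a block device with the given SERIAL shows up in lsblk output.
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME   # e.g. right after nvme connect

The seq 1 5 pass that follows (target/rpc.sh@99-107) repeats the create/attach/delete cycle without ever connecting, and the section then closes by summing admin and I/O qpair counts out of nvmf_get_stats before cleanup.
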
00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 [2024-12-05 12:13:47.259556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.462 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.463 [2024-12-05 12:13:47.307549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.463 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:16.722 12:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 [2024-12-05 12:13:47.355623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 [2024-12-05 12:13:47.403722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 
12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.722 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 [2024-12-05 12:13:47.451822] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:16.723 "poll_groups": [ 00:10:16.723 { 00:10:16.723 "admin_qpairs": 2, 00:10:16.723 "completed_nvme_io": 66, 00:10:16.723 "current_admin_qpairs": 0, 00:10:16.723 "current_io_qpairs": 0, 00:10:16.723 "io_qpairs": 16, 00:10:16.723 "name": "nvmf_tgt_poll_group_000", 00:10:16.723 "pending_bdev_io": 0, 00:10:16.723 "transports": [ 00:10:16.723 { 00:10:16.723 "trtype": "TCP" 00:10:16.723 } 00:10:16.723 ] 00:10:16.723 }, 00:10:16.723 { 00:10:16.723 "admin_qpairs": 3, 00:10:16.723 "completed_nvme_io": 118, 00:10:16.723 "current_admin_qpairs": 0, 00:10:16.723 "current_io_qpairs": 0, 00:10:16.723 "io_qpairs": 17, 00:10:16.723 "name": "nvmf_tgt_poll_group_001", 00:10:16.723 "pending_bdev_io": 0, 00:10:16.723 "transports": [ 00:10:16.723 { 00:10:16.723 "trtype": "TCP" 00:10:16.723 } 00:10:16.723 ] 00:10:16.723 }, 00:10:16.723 { 00:10:16.723 "admin_qpairs": 1, 00:10:16.723 "completed_nvme_io": 69, 00:10:16.723 "current_admin_qpairs": 0, 00:10:16.723 "current_io_qpairs": 0, 00:10:16.723 "io_qpairs": 19, 00:10:16.723 "name": "nvmf_tgt_poll_group_002", 00:10:16.723 "pending_bdev_io": 0, 00:10:16.723 "transports": [ 00:10:16.723 { 00:10:16.723 "trtype": "TCP" 00:10:16.723 } 00:10:16.723 ] 00:10:16.723 }, 00:10:16.723 { 00:10:16.723 "admin_qpairs": 1, 00:10:16.723 "completed_nvme_io": 167, 00:10:16.723 "current_admin_qpairs": 0, 00:10:16.723 "current_io_qpairs": 0, 00:10:16.723 "io_qpairs": 18, 00:10:16.723 "name": "nvmf_tgt_poll_group_003", 00:10:16.723 "pending_bdev_io": 0, 00:10:16.723 "transports": [ 00:10:16.723 { 00:10:16.723 "trtype": "TCP" 00:10:16.723 } 00:10:16.723 ] 00:10:16.723 } 00:10:16.723 ], 
00:10:16.723 "tick_rate": 2200000000 00:10:16.723 }' 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:16.723 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.982 rmmod nvme_tcp 00:10:16.982 rmmod nvme_fabrics 00:10:16.982 rmmod nvme_keyring 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 73533 ']' 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 73533 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 73533 ']' 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 73533 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73533 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 73533' 00:10:16.982 killing process with pid 73533 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 73533 00:10:16.982 12:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 73533 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:17.241 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:10:17.500 00:10:17.500 real 0m19.328s 00:10:17.500 user 1m11.987s 00:10:17.500 sys 0m2.348s 00:10:17.500 12:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.500 ************************************ 00:10:17.500 END TEST nvmf_rpc 00:10:17.500 ************************************ 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.500 ************************************ 00:10:17.500 START TEST nvmf_invalid 00:10:17.500 ************************************ 00:10:17.500 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:17.760 * Looking for test storage... 00:10:17.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:10:17.760 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.761 --rc genhtml_branch_coverage=1 00:10:17.761 --rc genhtml_function_coverage=1 00:10:17.761 --rc genhtml_legend=1 00:10:17.761 --rc geninfo_all_blocks=1 00:10:17.761 --rc geninfo_unexecuted_blocks=1 00:10:17.761 00:10:17.761 ' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.761 --rc genhtml_branch_coverage=1 00:10:17.761 --rc genhtml_function_coverage=1 00:10:17.761 --rc genhtml_legend=1 00:10:17.761 --rc geninfo_all_blocks=1 00:10:17.761 --rc geninfo_unexecuted_blocks=1 00:10:17.761 00:10:17.761 ' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.761 --rc genhtml_branch_coverage=1 00:10:17.761 --rc genhtml_function_coverage=1 00:10:17.761 --rc genhtml_legend=1 00:10:17.761 --rc geninfo_all_blocks=1 00:10:17.761 --rc geninfo_unexecuted_blocks=1 00:10:17.761 00:10:17.761 ' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.761 --rc genhtml_branch_coverage=1 00:10:17.761 --rc genhtml_function_coverage=1 00:10:17.761 --rc genhtml_legend=1 00:10:17.761 --rc geninfo_all_blocks=1 00:10:17.761 --rc geninfo_unexecuted_blocks=1 00:10:17.761 00:10:17.761 ' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:17.761 12:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.761 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:17.761 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
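The nvmf/common.sh@145-160 assignments traced above name the pieces of a veth test topology: host-side initiator interfaces (nvmf_init_if, nvmf_init_if2), target interfaces (nvmf_tgt_if, nvmf_tgt_if2) that live inside the nvmf_tgt_ns_spdk network namespace, and a bridge (nvmf_br) that joins their peer ends. The commands nvmf_veth_init runs next (common.sh@177-219, traced below) amount to the following condensed sketch; only one initiator/target pair is shown, and the flags are copied from the trace rather than from the harness source:

  # Condensed replay of nvmf_veth_init as traced below (one veth pair per
  # side; the harness also wires nvmf_init_if2/nvmf_tgt_if2 the same way).
  ns=nvmf_tgt_ns_spdk
  ip netns add "$ns"
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns "$ns"
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_FIRST_INITIATOR_IP
  ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if # NVMF_FIRST_TARGET_IP
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$ns" ip link set nvmf_tgt_if up
  ip netns exec "$ns" ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge joins the two peer ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # host -> namespaced target, as checked at common.sh@222

The harness additionally tags each iptables rule with an SPDK_NVMF comment so the iptr teardown traced earlier (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip them wholesale, and the "Cannot find device" / "Cannot open network namespace" lines below appear to be expected: each failed deletion is immediately followed by a traced "# true", i.e. the teardown of a not-yet-built topology is deliberately tolerated before the rebuild.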
00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:17.762 Cannot find device "nvmf_init_br" 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:17.762 Cannot find device "nvmf_init_br2" 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:17.762 Cannot find device "nvmf_tgt_br" 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:17.762 Cannot find device "nvmf_tgt_br2" 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:17.762 Cannot find device "nvmf_init_br" 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:17.762 Cannot find device "nvmf_init_br2" 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:10:17.762 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:18.020 Cannot find device "nvmf_tgt_br" 00:10:18.020 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:18.021 Cannot find device "nvmf_tgt_br2" 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:18.021 Cannot find device "nvmf_br" 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:18.021 Cannot find device "nvmf_init_if" 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:18.021 Cannot find device "nvmf_init_if2" 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.021 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.021 12:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:18.021 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:18.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:10:18.280 00:10:18.280 --- 10.0.0.3 ping statistics --- 00:10:18.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.280 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:18.280 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:18.280 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:10:18.280 00:10:18.280 --- 10.0.0.4 ping statistics --- 00:10:18.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.280 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:18.280 00:10:18.280 --- 10.0.0.1 ping statistics --- 00:10:18.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.280 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:18.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:18.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:18.280 00:10:18.280 --- 10.0.0.2 ping statistics --- 00:10:18.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.280 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74096 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74096 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74096 ']' 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.280 12:13:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:18.280 [2024-12-05 12:13:49.012014] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
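nvmfappstart (traced at common.sh@507-510 just above) launches nvmf_tgt inside the namespace and then blocks in waitforlisten until pid 74096 answers on /var/tmp/spdk.sock. The harness's own waitforlisten lives in autotest_common.sh and is not traced in full here; a minimal stand-in for the same start-and-wait contract, where the readiness probe (polling rpc_get_methods) is an assumption rather than the harness's actual check, would look like:

  # Sketch of the start-and-wait contract traced above. The rpc_get_methods
  # polling loop is an assumed readiness probe, not autotest_common.sh's code.
  spdk=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  for ((retries = 100; retries > 0; retries--)); do
      kill -0 "$nvmfpid" 2> /dev/null || exit 1   # bail out if the target died during startup
      # AF_UNIX sockets live in the filesystem, not the network namespace, so
      # the host-side rpc.py can reach the app running inside nvmf_tgt_ns_spdk.
      if "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done

Once the socket answers, the test proceeds to the negative nvmf_create_subsystem calls below, each of which captures the JSON-RPC error text into $out and pattern-matches it (Unable to find target, Invalid SN, Invalid MN).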
00:10:18.280 [2024-12-05 12:13:49.012100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.542 [2024-12-05 12:13:49.166302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.542 [2024-12-05 12:13:49.243856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.542 [2024-12-05 12:13:49.243947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.542 [2024-12-05 12:13:49.243961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.542 [2024-12-05 12:13:49.243972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.542 [2024-12-05 12:13:49.243986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.542 [2024-12-05 12:13:49.245646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.542 [2024-12-05 12:13:49.245754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.542 [2024-12-05 12:13:49.245895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.542 [2024-12-05 12:13:49.245906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.801 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.801 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:10:18.801 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.801 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.801 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:18.801 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.801 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:18.801 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15545 00:10:19.060 [2024-12-05 12:13:49.782861] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:19.060 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/05 12:13:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15545 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:10:19.060 request: 00:10:19.060 { 00:10:19.060 "method": "nvmf_create_subsystem", 00:10:19.060 "params": { 00:10:19.060 "nqn": "nqn.2016-06.io.spdk:cnode15545", 00:10:19.060 "tgt_name": "foobar" 00:10:19.060 } 00:10:19.060 } 00:10:19.060 Got JSON-RPC error response 00:10:19.060 GoRPCClient: error on JSON-RPC call' 00:10:19.060 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/05 12:13:49 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode15545 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:10:19.060 request: 00:10:19.060 { 00:10:19.060 "method": "nvmf_create_subsystem", 00:10:19.060 "params": { 00:10:19.060 "nqn": "nqn.2016-06.io.spdk:cnode15545", 00:10:19.060 "tgt_name": "foobar" 00:10:19.060 } 00:10:19.060 } 00:10:19.060 Got JSON-RPC error response 00:10:19.060 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:19.060 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:19.060 12:13:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22028 00:10:19.318 [2024-12-05 12:13:50.055391] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22028: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:19.318 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/05 12:13:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22028 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:10:19.318 request: 00:10:19.318 { 00:10:19.318 "method": "nvmf_create_subsystem", 00:10:19.318 "params": { 00:10:19.318 "nqn": "nqn.2016-06.io.spdk:cnode22028", 00:10:19.318 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:10:19.318 } 00:10:19.318 } 00:10:19.318 Got JSON-RPC error response 00:10:19.318 GoRPCClient: error on JSON-RPC call' 00:10:19.318 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/05 12:13:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode22028 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:10:19.318 request: 00:10:19.318 { 00:10:19.318 "method": "nvmf_create_subsystem", 00:10:19.318 "params": { 00:10:19.318 "nqn": "nqn.2016-06.io.spdk:cnode22028", 00:10:19.318 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:10:19.318 } 00:10:19.318 } 00:10:19.318 Got JSON-RPC error response 00:10:19.318 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:19.318 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:19.318 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3040 00:10:19.577 [2024-12-05 12:13:50.400077] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3040: invalid model number 'SPDK_Controller' 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/05 12:13:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode3040], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:10:19.577 request: 00:10:19.577 { 00:10:19.577 "method": "nvmf_create_subsystem", 00:10:19.577 "params": { 00:10:19.577 "nqn": "nqn.2016-06.io.spdk:cnode3040", 00:10:19.577 "model_number": "SPDK_Controller\u001f" 00:10:19.577 
} 00:10:19.577 } 00:10:19.577 Got JSON-RPC error response 00:10:19.577 GoRPCClient: error on JSON-RPC call' 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/05 12:13:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode3040], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:10:19.577 request: 00:10:19.577 { 00:10:19.577 "method": "nvmf_create_subsystem", 00:10:19.577 "params": { 00:10:19.577 "nqn": "nqn.2016-06.io.spdk:cnode3040", 00:10:19.577 "model_number": "SPDK_Controller\u001f" 00:10:19.577 } 00:10:19.577 } 00:10:19.577 Got JSON-RPC error response 00:10:19.577 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.577 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:19.837 
12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:19.837 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'G$R+G6o-:|k9W>S!h0d~u' 00:10:19.838 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'G$R+G6o-:|k9W>S!h0d~u' nqn.2016-06.io.spdk:cnode15071 00:10:20.097 [2024-12-05 12:13:50.880925] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15071: invalid serial number 'G$R+G6o-:|k9W>S!h0d~u' 00:10:20.097 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # 
out='2024/12/05 12:13:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15071 serial_number:G$R+G6o-:|k9W>S!h0d~u], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN G$R+G6o-:|k9W>S!h0d~u
00:10:20.097 request:
00:10:20.097 {
00:10:20.097 "method": "nvmf_create_subsystem",
00:10:20.097 "params": {
00:10:20.097 "nqn": "nqn.2016-06.io.spdk:cnode15071",
00:10:20.097 "serial_number": "G$R+G6o-:|k9W>S!h0d~u"
00:10:20.097 }
00:10:20.097 }
00:10:20.097 Got JSON-RPC error response
00:10:20.097 GoRPCClient: error on JSON-RPC call'
00:10:20.097 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ $out == *\I\n\v\a\l\i\d\ \S\N* ]]
00:10:20.097 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:10:20.097 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' [... codes 35 through 125 trimmed ...] '126' '127')
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e'
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:10:20.098 12:13:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[... the same printf %x / echo -e / string+= / (( ll++ )) cycle repeats for the remaining 40 characters of the 41-character string; elapsed time 00:10:20.098 through 00:10:20.358 ...]
00:10:20.358 12:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]]
00:10:20.359 12:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'nOhJ$GOMu58%zS[OM!;j7gze, d7@B$tuvc]yHj`H'
00:10:20.359 12:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'nOhJ$GOMu58%zS[OM!;j7gze, d7@B$tuvc]yHj`H' nqn.2016-06.io.spdk:cnode12002
00:10:20.616 [2024-12-05 12:13:51.473934] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12002: invalid model number 'nOhJ$GOMu58%zS[OM!;j7gze, d7@B$tuvc]yHj`H'
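The run above is invalid.sh building one random string, character by character, under bash xtrace. A minimal sketch of a gen_random_s-style helper, reconstructed only from this trace (the RANDOM-based index selection is an assumption, since the trace shows only the codes that were picked; the real helper lives in target/invalid.sh):

    # Build a random string of $1 characters drawn from ASCII codes 32-127,
    # the same chars=() pool traced above (note 127 is DEL, not printable).
    gen_random_s() {
        local length=$1 ll
        local chars=($(seq 32 127))                # decimal character codes
        local string ch code
        for ((ll = 0; ll < length; ll++)); do
            code=${chars[RANDOM % ${#chars[@]}]}   # assumed selection logic
            printf -v ch "\x$(printf %x "$code")"  # code -> literal character
            string+=$ch
        done
        echo "$string"
    }

The lengths are the interesting part: NVMe reserves 20 bytes for a serial number and 40 for a model number, so the 21-character string rejected earlier and the 41-character string here are each exactly one byte too long, which is the smallest possible "Invalid SN"/"Invalid MN" trigger.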
00:10:20.874 12:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/05 12:13:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:nOhJ$GOMu58%zS[OM!;j7gze, d7@B$tuvc]yHj`H nqn:nqn.2016-06.io.spdk:cnode12002], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN nOhJ$GOMu58%zS[OM!;j7gze, d7@B$tuvc]yHj`H
00:10:20.874 request:
00:10:20.874 {
00:10:20.874 "method": "nvmf_create_subsystem",
00:10:20.874 "params": {
00:10:20.874 "nqn": "nqn.2016-06.io.spdk:cnode12002",
00:10:20.874 "model_number": "nOhJ$GOMu58%zS[OM!;j7gze, d7@B$tuvc]yHj`H"
00:10:20.874 }
00:10:20.874 }
00:10:20.874 Got JSON-RPC error response
00:10:20.874 GoRPCClient: error on JSON-RPC call'
00:10:20.875 12:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ $out == *\I\n\v\a\l\i\d\ \M\N* ]]
00:10:20.875 12:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:10:21.132 [2024-12-05 12:13:51.826491] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:21.132 12:13:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:10:21.391 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:10:21.391 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:10:21.391 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:10:21.391 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:10:21.391 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:10:21.663 [2024-12-05 12:13:52.501541] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:10:21.949 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/05 12:13:52 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters
00:10:21.949 request:
00:10:21.949 {
00:10:21.949 "method": "nvmf_subsystem_remove_listener",
00:10:21.949 "params": {
00:10:21.949 "nqn": "nqn.2016-06.io.spdk:cnode",
00:10:21.949 "listen_address": {
00:10:21.949 "trtype": "tcp",
00:10:21.949 "traddr": "",
00:10:21.949 "trsvcid": "4421"
00:10:21.949 }
00:10:21.949 }
00:10:21.949 }
00:10:21.949 Got JSON-RPC error response
00:10:21.949 GoRPCClient: error on JSON-RPC call'
00:10:21.949 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ $out != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:10:21.949 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19855 -i 0
00:10:21.949 [2024-12-05 12:13:52.757843] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19855: invalid cntlid range [0-65519]
00:10:21.949 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/05 12:13:52 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode19855], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519]
00:10:21.949 request:
00:10:21.949 {
00:10:21.949 "method": "nvmf_create_subsystem",
00:10:21.949 "params": {
00:10:21.949 "nqn": "nqn.2016-06.io.spdk:cnode19855",
00:10:21.949 "min_cntlid": 0
00:10:21.949 }
00:10:21.949 }
00:10:21.949 Got JSON-RPC error response
00:10:21.949 GoRPCClient: error on JSON-RPC call'
00:10:21.949 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:21.949 12:13:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16458 -i 65520
00:10:22.208 [2024-12-05 12:13:53.026165] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16458: invalid cntlid range [65520-65519]
00:10:22.208 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/05 12:13:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode16458], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519]
00:10:22.208 request:
00:10:22.208 {
00:10:22.208 "method": "nvmf_create_subsystem",
00:10:22.208 "params": {
00:10:22.208 "nqn": "nqn.2016-06.io.spdk:cnode16458",
00:10:22.208 "min_cntlid": 65520
00:10:22.208 }
00:10:22.208 }
00:10:22.208 Got JSON-RPC error response
00:10:22.208 GoRPCClient: error on JSON-RPC call'
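Every negative case in this test follows the shape just traced: call rpc.py with one out-of-range parameter, capture the combined output, and substring-match the expected Code=-32602 message. A condensed sketch of that pattern (expect_rpc_error is an illustrative name; invalid.sh does this inline with out=$(...) and [[ $out == *...* ]]):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Run an RPC that must fail, then check the error text it produced.
    expect_rpc_error() {
        local pattern=$1; shift
        local out
        if out=$("$@" 2>&1); then
            return 1                      # the call unexpectedly succeeded
        fi
        [[ $out == *"$pattern"* ]]        # error message must match
    }

    # min_cntlid below the valid window, as in the cnode19855 case above:
    expect_rpc_error 'Invalid cntlid range' \
        "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19855 -i 0

The target accepts controller IDs 1 through 65519 (0xFFEF), so min_cntlid 0, min_cntlid 65520, max_cntlid 0, max_cntlid 65520, and a min of 6 with a max of 5 are all rejected the same way, as the following cases show.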
00:10:22.208 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:22.208 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14395 -I 0
00:10:22.774 [2024-12-05 12:13:53.366770] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14395: invalid cntlid range [1-0]
00:10:22.775 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/05 12:13:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14395], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0]
00:10:22.775 request:
00:10:22.775 {
00:10:22.775 "method": "nvmf_create_subsystem",
00:10:22.775 "params": {
00:10:22.775 "nqn": "nqn.2016-06.io.spdk:cnode14395",
00:10:22.775 "max_cntlid": 0
00:10:22.775 }
00:10:22.775 }
00:10:22.775 Got JSON-RPC error response
00:10:22.775 GoRPCClient: error on JSON-RPC call'
00:10:22.775 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:22.775 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13722 -I 65520
00:10:23.034 [2024-12-05 12:13:53.707168] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13722: invalid cntlid range [1-65520]
00:10:23.034 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/05 12:13:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13722], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520]
00:10:23.034 request:
00:10:23.034 {
00:10:23.034 "method": "nvmf_create_subsystem",
00:10:23.034 "params": {
00:10:23.034 "nqn": "nqn.2016-06.io.spdk:cnode13722",
00:10:23.034 "max_cntlid": 65520
00:10:23.034 }
00:10:23.034 }
00:10:23.034 Got JSON-RPC error response
00:10:23.034 GoRPCClient: error on JSON-RPC call'
00:10:23.034 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:23.034 12:13:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17700 -i 6 -I 5
00:10:23.293 [2024-12-05 12:13:54.043625] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17700: invalid cntlid range [6-5]
00:10:23.293 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/05 12:13:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode17700], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5]
00:10:23.293 request:
00:10:23.293 {
00:10:23.293 "method": "nvmf_create_subsystem",
00:10:23.293 "params": {
00:10:23.293 "nqn": "nqn.2016-06.io.spdk:cnode17700",
00:10:23.293 "min_cntlid": 6,
00:10:23.293 "max_cntlid": 5
00:10:23.293 }
00:10:23.293 }
00:10:23.293 Got JSON-RPC error response
00:10:23.293 GoRPCClient: error on JSON-RPC call'
00:10:23.293 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:23.293 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:10:23.552 {
00:10:23.552 "name": "foobar",
00:10:23.552 "method": "nvmf_delete_target",
00:10:23.552 "req_id": 1
00:10:23.552 }
00:10:23.552 Got JSON-RPC error response
00:10:23.552 response:
00:10:23.552 {
00:10:23.552 "code": -32602,
00:10:23.552 "message": "The specified target doesn'\''t exist, cannot delete it."
00:10:23.552 }'
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ $out == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:23.552 rmmod nvme_tcp
00:10:23.552 rmmod nvme_fabrics
00:10:23.552 rmmod nvme_keyring
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 74096 ']'
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 74096
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 74096 ']'
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 74096
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74096
00:10:23.552 killing process with pid 74096
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74096'
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 74096
00:10:23.552 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 74096
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
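The iptables-restore that completes the iptr pipeline follows just below. Before it, a simplified sketch of what the killprocess 74096 sequence above amounts to (the FreeBSD ps-flags branch behind the uname check and the sudo-wrapped case are omitted; only the happy path traced here is shown):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                    # the '[' -z ... ']' guard
        kill -0 "$pid" 2> /dev/null || return 0      # nothing left to kill
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1      # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reaping works here because nvmf_tgt is a child of this shell
    }

The wait matters: it guarantees the reactor has fully exited before the veth teardown below starts, so the listener ports and hugepages are released in order.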
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:10:23.810 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0
00:10:24.067
00:10:24.067 real 0m6.481s
00:10:24.067 user 0m24.932s
00:10:24.067 sys 0m1.631s
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:10:24.067 ************************************
00:10:24.067 END TEST nvmf_invalid
00:10:24.067 ************************************
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:24.067 ************************************
00:10:24.067 START TEST nvmf_connect_stress
00:10:24.067 ************************************
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:24.067 * Looking for test storage...
00:10:24.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version
00:10:24.067 12:13:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:24.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.326 --rc genhtml_branch_coverage=1
00:10:24.326 --rc genhtml_function_coverage=1
00:10:24.326 --rc genhtml_legend=1
00:10:24.326 --rc geninfo_all_blocks=1
00:10:24.326 --rc geninfo_unexecuted_blocks=1
00:10:24.326
00:10:24.326 '
[... the same --rc flag block repeats for the plain LCOV_OPTS= assignment and for the export and assignment of LCOV='lcov ...', trimmed ...]
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
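The lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, field by field. A compact sketch of the same comparison (simplified: the real cmp_versions also handles >, >=, and equality through the lt/gt/eq counters seen in the trace):

    # "less than" for dotted versions: lt 1.15 2 -> true
    lt() {
        local IFS='.-:' i v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # versions are equal
    }

    lt 1.15 2 && echo "lcov is pre-2.0"

Here it returns success, so lcov_rc_opt is set to the pre-2.0 spelling of the coverage flags ('--rc lcov_branch_coverage=1 ...'), which is exactly what the LCOV_OPTS block above carries.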
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:24.326 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triplet repeated once per prior sourcing, trimmed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same trimmed triplet run ...]:/var/lib/snapd/snap/bin
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same trimmed triplet run ...]:/var/lib/snapd/snap/bin
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same trimmed triplet run ...]:/var/lib/snapd/snap/bin
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:24.327 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
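One real wart is visible above: build_nvmf_app_args evaluates '[' '' -eq 1 ']' and test prints "integer expression expected", because whichever test flag nvmf/common.sh line 33 reads expands to the empty string, and test's -eq requires an integer on both sides. The branch silently evaluates false and the script carries on, as the @37/@39 lines show. A tiny reproduction plus the usual defensive spelling (the variable name here is illustrative, standing in for the unset flag):

    flag=""                          # an unset/empty test flag
    [ "$flag" -eq 1 ] && echo on     # prints "[: : integer expression expected"

    # Treat empty/unset as 0 instead of erroring:
    [ "${flag:-0}" -eq 1 ] && echo "flag set" || echo "flag unset"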
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:24.327 12:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:24.327 Cannot find device "nvmf_init_br" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:24.327 Cannot find device "nvmf_init_br2" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:24.327 Cannot find device "nvmf_tgt_br" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.327 Cannot find device "nvmf_tgt_br2" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:24.327 Cannot find device "nvmf_init_br" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:24.327 Cannot find device "nvmf_init_br2" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:24.327 Cannot find device "nvmf_tgt_br" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:24.327 Cannot find device "nvmf_tgt_br2" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:24.327 Cannot find device "nvmf_br" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:24.327 Cannot find device "nvmf_init_if" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:24.327 Cannot find device "nvmf_init_if2" 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.327 12:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:10:24.327 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:24.585 12:13:55 
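[Annotation] The trace above first tears down any leftover devices — each "Cannot find device" / "Cannot open network namespace" error is expected on a clean host, since every delete is paired with a traced `true` so the failure does not abort the script — and then rebuilds the test topology: two initiator-side veth pairs on the host, two target-side pairs whose far ends move into the nvmf_tgt_ns_spdk namespace, a 10.0.0.0/24 address plan, and a bridge nvmf_br that the host-side peers are enslaved to in the lines that follow. A condensed sketch of the same build, using only the interface names and addresses visible in the trace:

    # Hedged sketch of nvmf_veth_init as traced above (names/IPs taken from the log).
    ip netns add nvmf_tgt_ns_spdk
    # Initiator-side and target-side veth pairs; the *_br peers join the bridge later.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target ends live inside the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Address plan: initiators .1/.2 on the host, targets .3/.4 in the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up, including loopback inside the namespace.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up

The enslaving (`master nvmf_br`) and the firewall rules follow in the trace below.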
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:24.585 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:24.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:24.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:24.586 00:10:24.586 --- 10.0.0.3 ping statistics --- 00:10:24.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.586 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:24.586 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:24.586 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:10:24.586 00:10:24.586 --- 10.0.0.4 ping statistics --- 00:10:24.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.586 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:24.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:24.586 00:10:24.586 --- 10.0.0.1 ping statistics --- 00:10:24.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.586 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:24.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:24.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:24.586 00:10:24.586 --- 10.0.0.2 ping statistics --- 00:10:24.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.586 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=74663 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:24.586 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 74663 00:10:24.843 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 74663 ']' 00:10:24.843 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.843 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.844 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.844 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.844 12:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.844 [2024-12-05 12:13:55.514515] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
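[Annotation] Two things happen in the stretch above before the target comes up. First, the `ipts` wrapper appends `-m comment --comment 'SPDK_NVMF:<rule>'` to every iptables rule it installs; that tag is what lets teardown later strip exactly these rules with `iptables-save | grep -v SPDK_NVMF | iptables-restore` (the `iptr` helper, visible further down). Second, four pings verify the bridged path in both directions — host to namespace (.3/.4) and namespace back to host (.1/.2) — so any topology mistake fails fast, before NVMe-oF is involved. A minimal sketch of the tagging idiom; the function bodies are a reconstruction from the traced expansions, not copied source:

    # Hedged sketch of the ipts/iptr pair suggested by the trace.
    ipts() {
        # Install a rule and tag it so it can be found (and removed) later.
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # Drop every tagged rule in one shot without touching unrelated rules.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Tag-and-filter is a common pattern for test fixtures that must not disturb the host's existing firewall configuration.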
00:10:24.844 [2024-12-05 12:13:55.514619] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.844 [2024-12-05 12:13:55.669574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:25.101 [2024-12-05 12:13:55.731674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.101 [2024-12-05 12:13:55.731730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.101 [2024-12-05 12:13:55.731744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.101 [2024-12-05 12:13:55.731755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.101 [2024-12-05 12:13:55.731764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.101 [2024-12-05 12:13:55.733390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.101 [2024-12-05 12:13:55.733454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.101 [2024-12-05 12:13:55.733462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.038 [2024-12-05 12:13:56.598773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:26.038 12:13:56 
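[Annotation] The core masks partition the machine: `-m 0xE` (binary 1110) gives nvmf_tgt cores 1-3 — matching the three reactor notices above — while connect_stress later takes `-c 0x1`, i.e. core 0, so initiator and target never contend for a reactor. Once `waitforlisten` sees the app answering on /var/tmp/spdk.sock, the script provisions the target over `rpc_cmd`: a TCP transport, subsystem cnode1 (any host allowed, serial SPDK00000000000001, up to 10 namespaces), the 10.0.0.3:4420 listener whose notice appears just below, and a null bdev to serve I/O. A condensed sketch of the same sequence using scripts/rpc.py directly, flags exactly as traced (`-o`/`-u 8192` are transport tuning options):

    # Hedged sketch of the provisioning RPCs; rpc_cmd in the trace drives the same calls.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks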
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.038 [2024-12-05 12:13:56.618963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.038 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.039 NULL1 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74715 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.039 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.298 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:26.298 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:26.298 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.298 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.298 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.556 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.556 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:26.556 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.556 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.557 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.814 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.815 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:26.815 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.815 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.815 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.381 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.381 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:27.381 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.381 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.381 12:13:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.640 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.640 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:27.640 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.640 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.640 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.898 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.898 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:27.898 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.898 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.898 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.158 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.158 
12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:28.158 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.158 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.158 12:13:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.726 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.726 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:28.726 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.726 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.726 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.983 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.983 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:28.983 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.983 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.983 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.241 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.241 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:29.241 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.241 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.241 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.499 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.500 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:29.500 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.500 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.500 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.758 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.758 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:29.758 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.758 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.758 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.324 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.324 12:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:30.324 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.324 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.324 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.582 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.582 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:30.582 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.582 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.582 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:30.840 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.840 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:30.841 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:30.841 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.841 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.100 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.100 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:31.100 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.100 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.100 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.358 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.358 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:31.358 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.358 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.358 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.925 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.925 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:31.925 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:31.925 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.925 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.184 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.184 12:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:32.184 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.184 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.184 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.442 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.442 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:32.442 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.442 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.442 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.701 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.701 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:32.701 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.701 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.701 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:32.960 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.960 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:32.960 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:32.960 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.960 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.527 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.527 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:33.527 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.527 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.527 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.785 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.785 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:33.785 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:33.785 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.785 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.050 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.050 12:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:34.050 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.050 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.050 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.309 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.309 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:34.309 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.309 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.309 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:34.567 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.567 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:34.567 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:34.567 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.567 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.131 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.132 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:35.132 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.132 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.132 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.388 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.388 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:35.388 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.388 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.388 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.645 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.645 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:35.645 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.645 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.645 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.902 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.902 12:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:35.902 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:35.902 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.902 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:36.159 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.159 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74715 00:10:36.159 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74715) - No such process 00:10:36.159 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74715 00:10:36.159 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.417 rmmod nvme_tcp 00:10:36.417 rmmod nvme_fabrics 00:10:36.417 rmmod nvme_keyring 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:10:36.417 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 74663 ']' 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 74663 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 74663 ']' 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 74663 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74663 00:10:36.418 killing process with pid 74663 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:36.418 
12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74663' 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 74663 00:10:36.418 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 74663 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:36.677 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.936 12:14:07 
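[Annotation] The long repetitive stretch above is the test body: connect_stress hammers the 10.0.0.3:4420 subsystem for 10 seconds (`-t 10`) while the script polls `kill -0 $PERF_PID` and replays the rpc.txt batch (built from twenty `cat` fragments) against the live target on every pass. When the PID disappears — the "No such process" from kill at line 34 — `wait` reaps the tool, and `nvmftestfini` unwinds in reverse: unload nvme-tcp/fabrics/keyring, kill the target (pid 74663, reactor_1), strip only the SPDK_NVMF-tagged iptables rules via `iptr`, and delete the veth/bridge/namespace topology, finishing just below. A minimal sketch of the poll-until-exit idiom; the loop body is a hypothetical stand-in for the traced rpc_cmd calls:

    # Hedged sketch of the liveness-polling pattern used by connect_stress.sh.
    ./connect_stress -c 0x1 -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do
        # kill -0 sends no signal; it only tests that the PID still exists.
        churn_target_config   # hypothetical stand-in for "rpc_cmd < rpc.txt"
    done
    wait "$PERF_PID"          # reap the child and propagate its exit status

Driving configuration churn concurrently with connect/disconnect stress is the point of the test: the target must survive both at once.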
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:10:36.936 00:10:36.936 real 0m12.765s 00:10:36.936 user 0m41.410s 00:10:36.936 sys 0m3.512s 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.936 ************************************ 00:10:36.936 END TEST nvmf_connect_stress 00:10:36.936 ************************************ 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.936 ************************************ 00:10:36.936 START TEST nvmf_fused_ordering 00:10:36.936 ************************************ 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:36.936 * Looking for test storage... 00:10:36.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:36.936 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.195 12:14:07 
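[Annotation] The fused_ordering test starts the same way every suite does: locate test storage, then probe the installed lcov so the right coverage flags get chosen. The `lt 1.15 2` call feeds `cmp_versions 1.15 '<' 2`, which — as the scripts/common.sh trace continues below — splits both versions on `.`, `-`, and `:` into arrays and compares them field by field; `1 < 2` decides it on the first field, so the legacy `--rc lcov_*_coverage=1` options are exported. A condensed sketch of that comparison, written to match the behavior shown in the trace rather than the exact source:

    # Hedged sketch of a field-wise version compare like cmp_versions in scripts/common.sh.
    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # greater in an earlier field: not less-than
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc flags"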
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.195 --rc genhtml_branch_coverage=1 00:10:37.195 --rc genhtml_function_coverage=1 00:10:37.195 --rc genhtml_legend=1 00:10:37.195 --rc geninfo_all_blocks=1 00:10:37.195 --rc geninfo_unexecuted_blocks=1 00:10:37.195 00:10:37.195 ' 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.195 --rc genhtml_branch_coverage=1 00:10:37.195 --rc genhtml_function_coverage=1 00:10:37.195 --rc genhtml_legend=1 00:10:37.195 --rc geninfo_all_blocks=1 00:10:37.195 --rc geninfo_unexecuted_blocks=1 00:10:37.195 00:10:37.195 ' 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.195 --rc genhtml_branch_coverage=1 00:10:37.195 --rc genhtml_function_coverage=1 00:10:37.195 --rc genhtml_legend=1 00:10:37.195 --rc geninfo_all_blocks=1 00:10:37.195 --rc geninfo_unexecuted_blocks=1 00:10:37.195 00:10:37.195 ' 00:10:37.195 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.195 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:37.195 --rc genhtml_branch_coverage=1 00:10:37.195 --rc genhtml_function_coverage=1 00:10:37.195 --rc genhtml_legend=1 00:10:37.195 --rc geninfo_all_blocks=1 00:10:37.195 --rc geninfo_unexecuted_blocks=1 00:10:37.195 00:10:37.195 ' 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:37.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:37.196 12:14:07 
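The "[: : integer expression expected" complaint above is common.sh line 33 evaluating '[' '' -eq 1 ']': the variable under test expands to an empty string, and -eq demands integers on both sides. The script tolerates the failure, but a defensive sketch would default the expansion before comparing (the flag name below is hypothetical, since the log shows only the empty expansion):

# '' is not an integer, so substitute 0 when the flag is unset or empty.
if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then    # hypothetical flag name
    echo "flag enabled"
fi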
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:37.196 Cannot find device "nvmf_init_br" 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:37.196 Cannot find device "nvmf_init_br2" 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:37.196 Cannot find device "nvmf_tgt_br" 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.196 Cannot find device "nvmf_tgt_br2" 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:37.196 Cannot find device "nvmf_init_br" 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:37.196 Cannot find device "nvmf_init_br2" 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:37.196 Cannot find device "nvmf_tgt_br" 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:37.196 Cannot find device "nvmf_tgt_br2" 00:10:37.196 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:10:37.197 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:37.197 Cannot find device "nvmf_br" 00:10:37.197 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:10:37.197 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:37.197 Cannot find device "nvmf_init_if" 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:10:37.197 
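Every "Cannot find device" message above is an expected no-op: before building anything, nvmf_veth_init probes for topology left over from an earlier run, and each failed delete is followed by true so the harness's error handling is not tripped. The pattern, distilled into a sketch:

# Pre-clean probes: attempt each teardown step and tolerate absence.
ip link set nvmf_init_br nomaster || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true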
12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:37.197 Cannot find device "nvmf_init_if2" 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:37.197 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:37.456 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:37.457 12:14:08 
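The commands above and immediately below assemble a self-contained test network: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, the four host-side ends enslaved to one bridge, and TCP port 4420 (the NVMe/TCP listener port used later) opened in the firewall. A consolidated sketch of the first initiator/target half, with names and addresses taken from the log (requires root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT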
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:37.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:37.457 00:10:37.457 --- 10.0.0.3 ping statistics --- 00:10:37.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.457 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:37.457 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:37.457 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:10:37.457 00:10:37.457 --- 10.0.0.4 ping statistics --- 00:10:37.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.457 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:10:37.457 00:10:37.457 --- 10.0.0.1 ping statistics --- 00:10:37.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.457 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:37.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:37.457 00:10:37.457 --- 10.0.0.2 ping statistics --- 00:10:37.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.457 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75095 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75095 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75095 ']' 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
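The four pings above verify connectivity in both directions across the bridge before the target application is launched inside the namespace, where it will listen on the 10.0.0.3/10.0.0.4 side; the harness then blocks until the RPC socket answers. The equivalent by hand, with the binary path and flags from the log (the socket poll is a simplified stand-in for waitforlisten):

# -i 0 sets the shared-memory id, -e 0xFFFF enables all tracepoint groups,
# -m 0x2 pins the reactor to core 1 (the log confirms "Reactor started on core 1").
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # wait for the default RPC socket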
00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.457 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.716 [2024-12-05 12:14:08.370445] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:10:37.716 [2024-12-05 12:14:08.370528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.716 [2024-12-05 12:14:08.521194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.716 [2024-12-05 12:14:08.579531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.716 [2024-12-05 12:14:08.579586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.716 [2024-12-05 12:14:08.579605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.716 [2024-12-05 12:14:08.579616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.716 [2024-12-05 12:14:08.579626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.716 [2024-12-05 12:14:08.580100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.975 [2024-12-05 12:14:08.763266] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.975 [2024-12-05 12:14:08.779433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.975 NULL1 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.975 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:37.975 [2024-12-05 12:14:08.833160] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
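rpc_cmd above is the harness wrapper around SPDK's scripts/rpc.py, so the provisioning sequence, and the fused-ordering helper it hands off to (whose output of 1024 numbered completions follows), can be reproduced by hand against the default RPC socket. Method names, arguments and paths below are copied from the log; the option meanings are not expanded beyond what the log shows:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512      # null bdev backing the namespace (1GB, 512 B blocks)
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'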
00:10:37.975 [2024-12-05 12:14:08.833209] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75132 ] 00:10:38.544 Attached to nqn.2016-06.io.spdk:cnode1 00:10:38.544 Namespace ID: 1 size: 1GB 00:10:38.544 fused_ordering(0) 00:10:38.544 fused_ordering(1) 00:10:38.544 fused_ordering(2) 00:10:38.544 fused_ordering(3) 00:10:38.544 fused_ordering(4) 00:10:38.544 fused_ordering(5) 00:10:38.544 fused_ordering(6) 00:10:38.544 fused_ordering(7) 00:10:38.544 fused_ordering(8) 00:10:38.544 fused_ordering(9) 00:10:38.544 fused_ordering(10) 00:10:38.544 fused_ordering(11) 00:10:38.544 fused_ordering(12) 00:10:38.544 fused_ordering(13) 00:10:38.544 fused_ordering(14) 00:10:38.544 fused_ordering(15) 00:10:38.544 fused_ordering(16) 00:10:38.544 fused_ordering(17) 00:10:38.544 fused_ordering(18) 00:10:38.544 fused_ordering(19) 00:10:38.544 fused_ordering(20) 00:10:38.544 fused_ordering(21) 00:10:38.544 fused_ordering(22) 00:10:38.544 fused_ordering(23) 00:10:38.544 fused_ordering(24) 00:10:38.544 fused_ordering(25) 00:10:38.544 fused_ordering(26) 00:10:38.544 fused_ordering(27) 00:10:38.544 fused_ordering(28) 00:10:38.544 fused_ordering(29) 00:10:38.544 fused_ordering(30) 00:10:38.544 fused_ordering(31) 00:10:38.544 fused_ordering(32) 00:10:38.544 fused_ordering(33) 00:10:38.544 fused_ordering(34) 00:10:38.544 fused_ordering(35) 00:10:38.544 fused_ordering(36) 00:10:38.544 fused_ordering(37) 00:10:38.544 fused_ordering(38) 00:10:38.544 fused_ordering(39) 00:10:38.544 fused_ordering(40) 00:10:38.544 fused_ordering(41) 00:10:38.544 fused_ordering(42) 00:10:38.544 fused_ordering(43) 00:10:38.544 fused_ordering(44) 00:10:38.544 fused_ordering(45) 00:10:38.544 fused_ordering(46) 00:10:38.544 fused_ordering(47) 00:10:38.544 fused_ordering(48) 00:10:38.544 fused_ordering(49) 00:10:38.544 fused_ordering(50) 00:10:38.544 fused_ordering(51) 00:10:38.544 fused_ordering(52) 00:10:38.544 fused_ordering(53) 00:10:38.544 fused_ordering(54) 00:10:38.544 fused_ordering(55) 00:10:38.544 fused_ordering(56) 00:10:38.544 fused_ordering(57) 00:10:38.544 fused_ordering(58) 00:10:38.544 fused_ordering(59) 00:10:38.544 fused_ordering(60) 00:10:38.544 fused_ordering(61) 00:10:38.544 fused_ordering(62) 00:10:38.544 fused_ordering(63) 00:10:38.544 fused_ordering(64) 00:10:38.544 fused_ordering(65) 00:10:38.544 fused_ordering(66) 00:10:38.544 fused_ordering(67) 00:10:38.544 fused_ordering(68) 00:10:38.544 fused_ordering(69) 00:10:38.544 fused_ordering(70) 00:10:38.544 fused_ordering(71) 00:10:38.544 fused_ordering(72) 00:10:38.544 fused_ordering(73) 00:10:38.544 fused_ordering(74) 00:10:38.544 fused_ordering(75) 00:10:38.544 fused_ordering(76) 00:10:38.544 fused_ordering(77) 00:10:38.544 fused_ordering(78) 00:10:38.544 fused_ordering(79) 00:10:38.544 fused_ordering(80) 00:10:38.544 fused_ordering(81) 00:10:38.544 fused_ordering(82) 00:10:38.544 fused_ordering(83) 00:10:38.544 fused_ordering(84) 00:10:38.544 fused_ordering(85) 00:10:38.544 fused_ordering(86) 00:10:38.544 fused_ordering(87) 00:10:38.544 fused_ordering(88) 00:10:38.544 fused_ordering(89) 00:10:38.544 fused_ordering(90) 00:10:38.544 fused_ordering(91) 00:10:38.544 fused_ordering(92) 00:10:38.544 fused_ordering(93) 00:10:38.544 fused_ordering(94) 00:10:38.544 fused_ordering(95) 00:10:38.544 fused_ordering(96) 00:10:38.544 fused_ordering(97) 00:10:38.544 
fused_ordering(98) 00:10:38.544 … fused_ordering(957) 00:10:39.888 [completions 98 through 957 elided: all 860 arrive consecutively and in order, with timestamps advancing from 00:10:38.544 to 00:10:39.888]
fused_ordering(958) 00:10:39.888 fused_ordering(959) 00:10:39.888 fused_ordering(960) 00:10:39.888 fused_ordering(961) 00:10:39.888 fused_ordering(962) 00:10:39.888 fused_ordering(963) 00:10:39.888 fused_ordering(964) 00:10:39.888 fused_ordering(965) 00:10:39.888 fused_ordering(966) 00:10:39.888 fused_ordering(967) 00:10:39.888 fused_ordering(968) 00:10:39.888 fused_ordering(969) 00:10:39.888 fused_ordering(970) 00:10:39.888 fused_ordering(971) 00:10:39.888 fused_ordering(972) 00:10:39.888 fused_ordering(973) 00:10:39.888 fused_ordering(974) 00:10:39.888 fused_ordering(975) 00:10:39.888 fused_ordering(976) 00:10:39.888 fused_ordering(977) 00:10:39.888 fused_ordering(978) 00:10:39.888 fused_ordering(979) 00:10:39.888 fused_ordering(980) 00:10:39.888 fused_ordering(981) 00:10:39.888 fused_ordering(982) 00:10:39.888 fused_ordering(983) 00:10:39.888 fused_ordering(984) 00:10:39.888 fused_ordering(985) 00:10:39.888 fused_ordering(986) 00:10:39.888 fused_ordering(987) 00:10:39.888 fused_ordering(988) 00:10:39.888 fused_ordering(989) 00:10:39.888 fused_ordering(990) 00:10:39.888 fused_ordering(991) 00:10:39.888 fused_ordering(992) 00:10:39.888 fused_ordering(993) 00:10:39.888 fused_ordering(994) 00:10:39.888 fused_ordering(995) 00:10:39.888 fused_ordering(996) 00:10:39.888 fused_ordering(997) 00:10:39.888 fused_ordering(998) 00:10:39.888 fused_ordering(999) 00:10:39.888 fused_ordering(1000) 00:10:39.888 fused_ordering(1001) 00:10:39.888 fused_ordering(1002) 00:10:39.888 fused_ordering(1003) 00:10:39.888 fused_ordering(1004) 00:10:39.888 fused_ordering(1005) 00:10:39.888 fused_ordering(1006) 00:10:39.888 fused_ordering(1007) 00:10:39.888 fused_ordering(1008) 00:10:39.888 fused_ordering(1009) 00:10:39.888 fused_ordering(1010) 00:10:39.888 fused_ordering(1011) 00:10:39.888 fused_ordering(1012) 00:10:39.888 fused_ordering(1013) 00:10:39.888 fused_ordering(1014) 00:10:39.888 fused_ordering(1015) 00:10:39.888 fused_ordering(1016) 00:10:39.888 fused_ordering(1017) 00:10:39.888 fused_ordering(1018) 00:10:39.888 fused_ordering(1019) 00:10:39.888 fused_ordering(1020) 00:10:39.888 fused_ordering(1021) 00:10:39.888 fused_ordering(1022) 00:10:39.888 fused_ordering(1023) 00:10:39.888 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:39.888 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:39.888 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:39.888 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:10:39.888 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.888 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:10:39.888 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.888 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.888 rmmod nvme_tcp 00:10:39.888 rmmod nvme_fabrics 00:10:39.888 rmmod nvme_keyring 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:10:40.146 12:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75095 ']' 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75095 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75095 ']' 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75095 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75095 00:10:40.146 killing process with pid 75095 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75095' 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75095 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75095 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:40.146 12:14:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:40.146 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:10:40.404 00:10:40.404 real 0m3.565s 00:10:40.404 user 0m3.860s 00:10:40.404 sys 0m1.327s 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.404 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:40.404 ************************************ 00:10:40.404 END TEST nvmf_fused_ordering 00:10:40.404 ************************************ 00:10:40.663 12:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.664 ************************************ 00:10:40.664 START TEST nvmf_ns_masking 00:10:40.664 ************************************ 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:40.664 * Looking for test storage... 
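[annotation] Before the next test starts, note the shape of the nvmf_veth_fini sequence that just ran: detach every veth end from the bridge, bring the links down, then delete the bridge, the host-side interfaces, and the namespaced target ends. A condensed sketch; interface names are from the trace, and the final netns deletion is an assumption about what _remove_spdk_ns does:

# Condensed from the nvmf_veth_fini trace above (nvmf/common.sh@233-246).
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster      # detach from the bridge before deleting it
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk     # assumption: _remove_spdk_ns drops the namespace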
00:10:40.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:40.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.664 --rc genhtml_branch_coverage=1 00:10:40.664 --rc genhtml_function_coverage=1 00:10:40.664 --rc genhtml_legend=1 00:10:40.664 --rc geninfo_all_blocks=1 00:10:40.664 --rc geninfo_unexecuted_blocks=1 00:10:40.664 00:10:40.664 ' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:40.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.664 --rc genhtml_branch_coverage=1 00:10:40.664 --rc genhtml_function_coverage=1 00:10:40.664 --rc genhtml_legend=1 00:10:40.664 --rc geninfo_all_blocks=1 00:10:40.664 --rc geninfo_unexecuted_blocks=1 00:10:40.664 00:10:40.664 ' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:40.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.664 --rc genhtml_branch_coverage=1 00:10:40.664 --rc genhtml_function_coverage=1 00:10:40.664 --rc genhtml_legend=1 00:10:40.664 --rc geninfo_all_blocks=1 00:10:40.664 --rc geninfo_unexecuted_blocks=1 00:10:40.664 00:10:40.664 ' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:40.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.664 --rc genhtml_branch_coverage=1 00:10:40.664 --rc genhtml_function_coverage=1 00:10:40.664 --rc genhtml_legend=1 00:10:40.664 --rc geninfo_all_blocks=1 00:10:40.664 --rc geninfo_unexecuted_blocks=1 00:10:40.664 00:10:40.664 ' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
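[annotation] The lt 1.15 2 gate traced above compares versions field by field rather than lexically, so 1.15 correctly sorts below 2. A simplified sketch of the cmp_versions "<" path from scripts/common.sh; this keeps only the less-than case and is not the script's exact code:

# Simplified from the cmp_versions trace: split on . - : and compare
# numerically, component by component, padding the shorter version with 0.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
lt 1.15 2 && echo "1.15 < 2"    # plain string comparison would get this wrong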
# uname -s 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.664 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f33b3026-314e-498f-baab-6ae93ea2f9cb 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=558192a5-ad8d-465a-aa04-6ea5422cf157 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9a446815-3a19-4771-a052-569178bae05f 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:40.665 12:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:40.665 Cannot find device "nvmf_init_br" 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:40.665 Cannot find device "nvmf_init_br2" 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:40.665 Cannot find device "nvmf_tgt_br" 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:10:40.665 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.924 Cannot find device "nvmf_tgt_br2" 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:40.924 Cannot find device "nvmf_init_br" 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:40.924 Cannot find device "nvmf_init_br2" 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:40.924 Cannot find device "nvmf_tgt_br" 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:40.924 Cannot find device 
"nvmf_tgt_br2" 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:40.924 Cannot find device "nvmf_br" 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:40.924 Cannot find device "nvmf_init_if" 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:40.924 Cannot find device "nvmf_init_if2" 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:40.924 
12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:40.924 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:41.184 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.184 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:41.184 00:10:41.184 --- 10.0.0.3 ping statistics --- 00:10:41.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.184 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:41.184 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:10:41.184 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:41.184 00:10:41.184 --- 10.0.0.4 ping statistics --- 00:10:41.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.184 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:41.184 00:10:41.184 --- 10.0.0.1 ping statistics --- 00:10:41.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.184 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:41.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:41.184 00:10:41.184 --- 10.0.0.2 ping statistics --- 00:10:41.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.184 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75373 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75373 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75373 ']' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.184 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.184 12:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:41.184 [2024-12-05 12:14:11.964036] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:10:41.184 [2024-12-05 12:14:11.964114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.443 [2024-12-05 12:14:12.110975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.443 [2024-12-05 12:14:12.163242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.443 [2024-12-05 12:14:12.163321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.443 [2024-12-05 12:14:12.163336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.443 [2024-12-05 12:14:12.163347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.443 [2024-12-05 12:14:12.163356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.443 [2024-12-05 12:14:12.163812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.443 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.443 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:10:41.443 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.443 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.443 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:41.702 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.702 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.961 [2024-12-05 12:14:12.619487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.961 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:41.961 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:41.961 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:42.220 Malloc1 00:10:42.220 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:42.479 Malloc2 00:10:42.479 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:42.738 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:42.997 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:43.256 [2024-12-05 12:14:14.017897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:43.256 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:43.256 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9a446815-3a19-4771-a052-569178bae05f -a 10.0.0.3 -s 4420 -i 4 00:10:43.515 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.515 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:10:43.515 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.515 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:43.515 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:10:45.419 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:45.419 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:45.419 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.419 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:45.419 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.419 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:10:45.419 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:45.420 [ 0]:0x1 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
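[annotation] The visibility checks that follow run against a target assembled a few steps earlier: a TCP transport, two malloc bdevs, one subsystem with Malloc1 as NSID 1, and a listener on the namespaced address. A condensed sketch of that bring-up, with rpc.py path and arguments exactly as traced:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192           # transport options as traced
$rpc bdev_malloc_create 64 512 -b Malloc1              # 64 MB, 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# the initiator connects with a fixed host NQN + host ID so masking is per-host
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 9a446815-3a19-4771-a052-569178bae05f -a 10.0.0.3 -s 4420 -i 4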
00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54927066687b4ce59c67cb9b91c92a46 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54927066687b4ce59c67cb9b91c92a46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.420 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:45.995 [ 0]:0x1 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54927066687b4ce59c67cb9b91c92a46 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54927066687b4ce59c67cb9b91c92a46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:45.995 [ 1]:0x2 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:45.995 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7b641a696fc4af980a0824bd460c30b 00:10:45.996 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7b641a696fc4af980a0824bd460c30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:45.996 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:45.996 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.996 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.269 12:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:46.527 12:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:46.527 12:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9a446815-3a19-4771-a052-569178bae05f -a 10.0.0.3 -s 4420 -i 4 00:10:46.527 12:14:17 
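[annotation] The "[ 0]:0x1"-style checks above are the ns_is_visible helper: list the namespaces the controller exposes, then read the NGUID for that NSID and require it to be non-zero, since a masked namespace reads back as 32 zeros. A minimal reconstruction from the trace; the controller name is resolved earlier via nvme list-subsys and is nvme0 in this run:

# Reconstructed from the ns_is_visible trace (target/ns_masking.sh@43-45).
ns_is_visible() {
    local nsid=$1                              # e.g. 0x1
    nvme list-ns "/dev/$ctrl_id" | grep "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
# $ctrl_id comes from: nvme list-subsys -o json | jq -r '... .Paths[0].Name' -> nvme0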
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:46.528 12:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:10:46.528 12:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.528 12:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:10:46.528 12:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:10:46.528 12:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:49.058 [ 0]:0x2 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7b641a696fc4af980a0824bd460c30b 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7b641a696fc4af980a0824bd460c30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:49.058 [ 0]:0x1 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54927066687b4ce59c67cb9b91c92a46 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54927066687b4ce59c67cb9b91c92a46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:49.058 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.059 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:49.059 [ 1]:0x2 00:10:49.059 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:49.059 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.059 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=c7b641a696fc4af980a0824bd460c30b 00:10:49.059 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7b641a696fc4af980a0824bd460c30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.059 12:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:49.316 [ 0]:0x2 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:49.316 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:49.573 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7b641a696fc4af980a0824bd460c30b 00:10:49.573 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ c7b641a696fc4af980a0824bd460c30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:49.573 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:49.574 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.574 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:49.831 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:49.831 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9a446815-3a19-4771-a052-569178bae05f -a 10.0.0.3 -s 4420 -i 4 00:10:49.831 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:49.831 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:10:49.831 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.831 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:10:49.831 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:10:49.831 12:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:52.371 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:52.372 [ 0]:0x1 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54927066687b4ce59c67cb9b91c92a46 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54927066687b4ce59c67cb9b91c92a46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:52.372 [ 1]:0x2 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7b641a696fc4af980a0824bd460c30b 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7b641a696fc4af980a0824bd460c30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:52.372 12:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
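Annotation: the ns_is_visible checks traced above all reduce to the same two probes: nvme list-ns greps for the namespace ID, and nvme id-ns ... -o json piped through jq -r .nguid verifies the NGUID is non-zero, since a namespace masked from this host either vanishes from the list or reports an all-zero NGUID. A minimal sketch of that helper, reconstructed from the ns_masking.sh@43-45 trace lines (an approximation, not the verbatim script):

    ns_is_visible() {
        # Probe 1: the namespace ID should appear in the active-namespace list.
        nvme list-ns /dev/nvme0 | grep "$1"
        # Probe 2: a visible namespace reports its real NGUID; a masked one
        # comes back as 32 zeros.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen around these calls (autotest_common.sh@652-679) simply inverts the helper's exit status, which is why the masked-namespace probes in this trace log nguid=00000000000000000000000000000000 and still count as a pass.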
00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:52.372 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:52.372 [ 0]:0x2 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7b641a696fc4af980a0824bd460c30b 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7b641a696fc4af980a0824bd460c30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:52.632 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:52.632 [2024-12-05 12:14:23.496348] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:52.891 2024/12/05 12:14:23 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:10:52.891 request: 00:10:52.891 { 00:10:52.891 "method": "nvmf_ns_remove_host", 00:10:52.891 "params": { 00:10:52.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.891 "nsid": 2, 00:10:52.891 "host": "nqn.2016-06.io.spdk:host1" 00:10:52.891 } 00:10:52.891 } 00:10:52.891 Got JSON-RPC error response 00:10:52.891 GoRPCClient: error on JSON-RPC call 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:10:52.891 [ 0]:0x2 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c7b641a696fc4af980a0824bd460c30b 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c7b641a696fc4af980a0824bd460c30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=75736 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 75736 /var/tmp/host.sock 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75736 ']' 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:52.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.891 12:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:52.891 [2024-12-05 12:14:23.720971] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
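Annotation: from this point the test drives two SPDK processes: the original nvmf target and a second spdk_tgt (pid 75736 above) acting as the NVMe-oF host, started with its own RPC socket so host-side calls do not collide with the target's default /var/tmp/spdk.sock. A sketch of the setup implied by the trace; the hostrpc wrapper body is taken from the ns_masking.sh@48 lines below, the rest is reconstruction:

    # Host-side SPDK instance on core mask 0x2 with a dedicated RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!

    # All host-side RPCs below go through this wrapper (ns_masking.sh@48).
    hostrpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }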
00:10:52.891 [2024-12-05 12:14:23.721078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75736 ] 00:10:53.150 [2024-12-05 12:14:23.873186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.150 [2024-12-05 12:14:23.934613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.409 12:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.409 12:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:10:53.409 12:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.977 12:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:53.977 12:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f33b3026-314e-498f-baab-6ae93ea2f9cb 00:10:53.977 12:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:10:53.977 12:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F33B3026314E498FBAAB6AE93EA2F9CB -i 00:10:54.235 12:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 558192a5-ad8d-465a-aa04-6ea5422cf157 00:10:54.235 12:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:10:54.235 12:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 558192A5AD8D465AAA046EA5422CF157 -i 00:10:54.494 12:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:54.751 12:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:55.009 12:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:55.009 12:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:55.268 nvme0n1 00:10:55.268 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:55.268 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:55.527 nvme1n2 00:10:55.527 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:55.527 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:55.527 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:55.527 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:55.527 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:55.786 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:55.786 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:55.786 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:55.786 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:56.354 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f33b3026-314e-498f-baab-6ae93ea2f9cb == \f\3\3\b\3\0\2\6\-\3\1\4\e\-\4\9\8\f\-\b\a\a\b\-\6\a\e\9\3\e\a\2\f\9\c\b ]] 00:10:56.354 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:56.354 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:56.355 12:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:56.355 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 558192a5-ad8d-465a-aa04-6ea5422cf157 == \5\5\8\1\9\2\a\5\-\a\d\8\d\-\4\6\5\a\-\a\a\0\4\-\6\e\a\5\4\2\2\c\f\1\5\7 ]] 00:10:56.355 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.614 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f33b3026-314e-498f-baab-6ae93ea2f9cb 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F33B3026314E498FBAAB6AE93EA2F9CB 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F33B3026314E498FBAAB6AE93EA2F9CB 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:56.873 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F33B3026314E498FBAAB6AE93EA2F9CB 00:10:57.132 [2024-12-05 12:14:27.946141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:10:57.132 [2024-12-05 12:14:27.946203] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:10:57.132 [2024-12-05 12:14:27.946215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.132 2024/12/05 12:14:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:F33B3026314E498FBAAB6AE93EA2F9CB no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:57.132 request: 00:10:57.132 { 00:10:57.132 "method": "nvmf_subsystem_add_ns", 00:10:57.132 "params": { 00:10:57.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.132 "namespace": { 00:10:57.132 "bdev_name": "invalid", 00:10:57.132 "nsid": 1, 00:10:57.132 "nguid": "F33B3026314E498FBAAB6AE93EA2F9CB", 00:10:57.132 "no_auto_visible": false, 00:10:57.132 "hide_metadata": false 00:10:57.132 } 00:10:57.132 } 00:10:57.132 } 00:10:57.132 Got JSON-RPC error response 00:10:57.132 GoRPCClient: error on JSON-RPC call 00:10:57.132 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:10:57.132 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.132 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.132 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.132 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f33b3026-314e-498f-baab-6ae93ea2f9cb 00:10:57.132 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:10:57.133 12:14:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F33B3026314E498FBAAB6AE93EA2F9CB -i 
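Annotation: the failed nvmf_subsystem_add_ns call above is the negative half of the check: bdev "invalid" cannot be opened (error=-19, ENODEV), so the RPC surfaces Code=-32602 and the namespace is then re-added with the real Malloc1 bdev at ns_masking.sh@142. The -g NGUID argument in both attempts comes from the uuid2nguid helper; the trace only exposes its tr -d - stage (nvmf/common.sh@787), so the uppercasing step in this sketch is an assumption:

    uuid2nguid() {
        # NGUID is the 32-hex-digit UUID with the dashes stripped, e.g.
        # f33b3026-314e-498f-baab-6ae93ea2f9cb -> F33B3026314E498FBAAB6AE93EA2F9CB
        local uuid=${1^^}   # uppercase (assumed; only 'tr -d -' shows in the trace)
        tr -d - <<< "$uuid"
    }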
00:10:57.391 12:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 75736 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75736 ']' 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75736 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75736 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:59.924 killing process with pid 75736 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75736' 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75736 00:10:59.924 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75736 00:11:00.192 12:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.467 rmmod nvme_tcp 00:11:00.467 rmmod nvme_fabrics 00:11:00.467 rmmod nvme_keyring 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@517 -- # '[' -n 75373 ']' 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75373 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75373 ']' 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75373 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75373 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.467 killing process with pid 75373 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75373' 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75373 00:11:00.467 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75373 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:00.725 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:00.984 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 
00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:11:00.985 ************************************ 00:11:00.985 END TEST nvmf_ns_masking 00:11:00.985 ************************************ 00:11:00.985 00:11:00.985 real 0m20.507s 00:11:00.985 user 0m34.271s 00:11:00.985 sys 0m3.054s 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.985 ************************************ 00:11:00.985 START TEST nvmf_auth_target 00:11:00.985 ************************************ 00:11:00.985 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:01.244 * Looking for test storage... 
00:11:01.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:01.244 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.244 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.244 12:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:01.244 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:01.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.245 --rc genhtml_branch_coverage=1 00:11:01.245 --rc genhtml_function_coverage=1 00:11:01.245 --rc genhtml_legend=1 00:11:01.245 --rc geninfo_all_blocks=1 00:11:01.245 --rc geninfo_unexecuted_blocks=1 00:11:01.245 00:11:01.245 ' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:01.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.245 --rc genhtml_branch_coverage=1 00:11:01.245 --rc genhtml_function_coverage=1 00:11:01.245 --rc genhtml_legend=1 00:11:01.245 --rc geninfo_all_blocks=1 00:11:01.245 --rc geninfo_unexecuted_blocks=1 00:11:01.245 00:11:01.245 ' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:01.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.245 --rc genhtml_branch_coverage=1 00:11:01.245 --rc genhtml_function_coverage=1 00:11:01.245 --rc genhtml_legend=1 00:11:01.245 --rc geninfo_all_blocks=1 00:11:01.245 --rc geninfo_unexecuted_blocks=1 00:11:01.245 00:11:01.245 ' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:01.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.245 --rc genhtml_branch_coverage=1 00:11:01.245 --rc genhtml_function_coverage=1 00:11:01.245 --rc genhtml_legend=1 00:11:01.245 --rc geninfo_all_blocks=1 00:11:01.245 --rc geninfo_unexecuted_blocks=1 00:11:01.245 00:11:01.245 ' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.245 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:01.245 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:01.246 
12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:01.246 Cannot find device "nvmf_init_br" 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:01.246 Cannot find device "nvmf_init_br2" 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:01.246 Cannot find device "nvmf_tgt_br" 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:01.246 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.504 Cannot find device "nvmf_tgt_br2" 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:01.504 Cannot find device "nvmf_init_br" 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:01.504 Cannot find device "nvmf_init_br2" 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:01.504 Cannot find device "nvmf_tgt_br" 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:01.504 Cannot find device "nvmf_tgt_br2" 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:01.504 Cannot find device "nvmf_br" 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:01.504 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:01.504 Cannot find device "nvmf_init_if" 00:11:01.505 12:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:01.505 Cannot find device "nvmf_init_if2" 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:01.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:01.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:01.505 12:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:01.505 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:01.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:01.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:11:01.764 00:11:01.764 --- 10.0.0.3 ping statistics --- 00:11:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.764 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:01.764 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:01.764 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:11:01.764 00:11:01.764 --- 10.0.0.4 ping statistics --- 00:11:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.764 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:01.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:01.764 00:11:01.764 --- 10.0.0.1 ping statistics --- 00:11:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.764 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:01.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:11:01.764 00:11:01.764 --- 10.0.0.2 ping statistics --- 00:11:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.764 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=76223 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 76223 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76223 ']' 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
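The nvmf_veth_init block above assembles the test topology before the target starts: initiator-side veths in the root namespace (10.0.0.1 and 10.0.0.2), target-side veths moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), everything joined through the nvmf_br bridge, iptables openings for TCP port 4420, and ping checks in both directions. A condensed standalone sketch of the same pattern, showing one veth pair per side (names and addresses taken from the trace; the iptables comment tag is shortened):

# Build an isolated initiator<->target link out of veth pairs and a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk        # target port lives in the netns

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br               # bridge the two peer ends
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Let NVMe/TCP (port 4420) in and let traffic cross the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                    # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # and back

With connectivity verified, nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), so the target only ever sees the veth network.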
00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.764 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.022 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.022 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:02.022 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.022 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.022 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76248 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3a9afa2532b8a32555406be849569b6bef4060e1e4e2552b 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.GjL 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3a9afa2532b8a32555406be849569b6bef4060e1e4e2552b 0 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3a9afa2532b8a32555406be849569b6bef4060e1e4e2552b 0 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3a9afa2532b8a32555406be849569b6bef4060e1e4e2552b 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:02.281 12:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.GjL 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.GjL 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.GjL 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d3c08cf71ae27bf14727ca8865a877b0e0b32736348cd4ae73328830c87a004a 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QgV 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d3c08cf71ae27bf14727ca8865a877b0e0b32736348cd4ae73328830c87a004a 3 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d3c08cf71ae27bf14727ca8865a877b0e0b32736348cd4ae73328830c87a004a 3 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d3c08cf71ae27bf14727ca8865a877b0e0b32736348cd4ae73328830c87a004a 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:02.281 12:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:02.281 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QgV 00:11:02.281 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QgV 00:11:02.281 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.QgV 00:11:02.281 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:02.281 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:02.282 12:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8f3baaef5a4dd413d9f3d98ff3714a76 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.b8y 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8f3baaef5a4dd413d9f3d98ff3714a76 1 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8f3baaef5a4dd413d9f3d98ff3714a76 1 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8f3baaef5a4dd413d9f3d98ff3714a76 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.b8y 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.b8y 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.b8y 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3c401d6c5e8f63effdb4830077df8c451199b4b1eb690a6a 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0qF 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3c401d6c5e8f63effdb4830077df8c451199b4b1eb690a6a 2 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3c401d6c5e8f63effdb4830077df8c451199b4b1eb690a6a 2 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3c401d6c5e8f63effdb4830077df8c451199b4b1eb690a6a 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:02.282 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0qF 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0qF 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.0qF 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5e22e97fe2888848faef463e0e6f8726b2dca297545e662e 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.NmW 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5e22e97fe2888848faef463e0e6f8726b2dca297545e662e 2 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5e22e97fe2888848faef463e0e6f8726b2dca297545e662e 2 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5e22e97fe2888848faef463e0e6f8726b2dca297545e662e 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.NmW 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.NmW 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.NmW 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:02.541 12:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c8b71e467b464fa22b7dd39bcb1d7845 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kQt 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c8b71e467b464fa22b7dd39bcb1d7845 1 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c8b71e467b464fa22b7dd39bcb1d7845 1 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c8b71e467b464fa22b7dd39bcb1d7845 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kQt 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kQt 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kQt 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=903acb95691b25d9b873211b02fdc14a8ef3085e9d1f1cc5629bf62a0271dc31 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.e2C 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
903acb95691b25d9b873211b02fdc14a8ef3085e9d1f1cc5629bf62a0271dc31 3 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 903acb95691b25d9b873211b02fdc14a8ef3085e9d1f1cc5629bf62a0271dc31 3 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=903acb95691b25d9b873211b02fdc14a8ef3085e9d1f1cc5629bf62a0271dc31 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.e2C 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.e2C 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.e2C 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76223 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76223 ']' 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.541 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.542 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.542 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76248 /var/tmp/host.sock 00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76248 ']' 00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:03.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
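Each gen_dhchap_key call above draws random bytes with xxd -p (half the requested character count, so "null 48" yields 48 hex characters) and feeds the hex string plus a digest index to an inline python snippet (nvmf/common.sh@733) whose body the trace does not show. The sketch below reproduces the standard DHHC-1 secret encoding -- base64 over the secret bytes followed by their little-endian CRC-32, digest index 0=null, 1=sha256, 2=sha384, 3=sha512 -- which is consistent with the DHHC-1:00:/DHHC-1:03: strings that appear later in this log; treat the CRC-32 framing as an assumption rather than a quote from common.sh.

# Hedged reconstruction of gen_dhchap_key + format_dhchap_key ("null 48").
key=$(xxd -p -c0 -l 24 /dev/urandom)    # 24 random bytes -> 48 hex chars
python3 - "$key" 0 <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is the secret
digest = int(sys.argv[2])                        # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(secret).to_bytes(4, "little")   # assumed framing, as in nvme-cli keys
print(f"DHHC-1:{digest:02x}:{base64.b64encode(secret + crc).decode()}:")
PY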
00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.108 12:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GjL 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.GjL 00:11:03.367 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GjL 00:11:03.636 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.QgV ]] 00:11:03.636 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QgV 00:11:03.636 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.636 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.636 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.636 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QgV 00:11:03.636 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QgV 00:11:03.894 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:03.894 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.b8y 00:11:03.894 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.894 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.894 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.894 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.b8y 00:11:03.894 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.b8y 00:11:04.229 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.0qF ]] 00:11:04.229 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0qF 00:11:04.229 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.229 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.229 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.229 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0qF 00:11:04.229 12:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0qF 00:11:04.500 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:04.500 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.NmW 00:11:04.500 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.500 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.500 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.500 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.NmW 00:11:04.500 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.NmW 00:11:04.759 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kQt ]] 00:11:04.759 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kQt 00:11:04.759 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.759 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.759 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.759 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kQt 00:11:04.759 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kQt 00:11:05.019 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:05.019 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.e2C 00:11:05.019 12:14:35 
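Every generated key file is registered twice: once with the target over the default RPC socket (rpc_cmd keyring_file_add_key ...) and once with the host application over /var/tmp/host.sock (hostrpc), because both ends of the later handshakes refer to keys purely by keyring name (key0..key3, ckey0..ckey2). Condensed for one pair, with paths from this trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side (default /var/tmp/spdk.sock)
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.GjL
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QgV
# host side (the spdk_tgt that was started with -r /var/tmp/host.sock)
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.GjL
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QgV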
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.019 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.019 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.019 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.e2C 00:11:05.019 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.e2C 00:11:05.277 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:05.277 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:05.277 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:05.277 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.277 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:05.277 12:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.535 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.793 00:11:05.793 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.793 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.793 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.052 { 00:11:06.052 "auth": { 00:11:06.052 "dhgroup": "null", 00:11:06.052 "digest": "sha256", 00:11:06.052 "state": "completed" 00:11:06.052 }, 00:11:06.052 "cntlid": 1, 00:11:06.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:06.052 "listen_address": { 00:11:06.052 "adrfam": "IPv4", 00:11:06.052 "traddr": "10.0.0.3", 00:11:06.052 "trsvcid": "4420", 00:11:06.052 "trtype": "TCP" 00:11:06.052 }, 00:11:06.052 "peer_address": { 00:11:06.052 "adrfam": "IPv4", 00:11:06.052 "traddr": "10.0.0.1", 00:11:06.052 "trsvcid": "60980", 00:11:06.052 "trtype": "TCP" 00:11:06.052 }, 00:11:06.052 "qid": 0, 00:11:06.052 "state": "enabled", 00:11:06.052 "thread": "nvmf_tgt_poll_group_000" 00:11:06.052 } 00:11:06.052 ]' 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.052 12:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.311 12:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:06.312 12:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.503 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.503 12:14:41 
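Each connect_authenticate round repeats the same sequence, seen above for key0 and starting again here for key1: pin the host's DH-CHAP digest and DH group, grant the host NQN access to cnode0 with a key pair, attach a controller through /var/tmp/host.sock, check that the qpair's auth block reports the expected digest/dhgroup with state "completed", then detach and rerun the handshake with the kernel initiator (nvme connect, which takes literal DHHC-1 secrets on the command line instead of keyring names). One round, condensed from this trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3

$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Note that the key3 round further down omits --dhchap-ctrlr-key entirely (ckeys[3] is empty), exercising unidirectional authentication in which the host does not verify the controller.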
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.762 00:11:11.021 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:11.021 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:11.021 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.281 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.281 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.281 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.281 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.281 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.281 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.281 { 00:11:11.281 "auth": { 00:11:11.281 "dhgroup": "null", 00:11:11.281 "digest": "sha256", 00:11:11.281 "state": "completed" 00:11:11.281 }, 00:11:11.281 "cntlid": 3, 00:11:11.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:11.281 "listen_address": { 00:11:11.281 "adrfam": "IPv4", 00:11:11.281 "traddr": "10.0.0.3", 00:11:11.281 "trsvcid": "4420", 00:11:11.281 "trtype": "TCP" 00:11:11.281 }, 00:11:11.281 "peer_address": { 00:11:11.281 "adrfam": "IPv4", 00:11:11.281 "traddr": "10.0.0.1", 00:11:11.281 "trsvcid": "32776", 00:11:11.281 "trtype": "TCP" 00:11:11.281 }, 00:11:11.281 "qid": 0, 00:11:11.281 "state": "enabled", 00:11:11.281 "thread": "nvmf_tgt_poll_group_000" 00:11:11.281 } 00:11:11.281 ]' 00:11:11.281 12:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.281 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.281 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.281 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:11.281 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.281 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.281 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.281 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.540 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret 
DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:11:11.540 12:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:11:12.494 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.494 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:12.494 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.494 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.494 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.494 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.494 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:12.494 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.752 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.011 00:11:13.011 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.011 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.011 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.269 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.269 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.269 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.269 12:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.269 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.269 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.269 { 00:11:13.269 "auth": { 00:11:13.269 "dhgroup": "null", 00:11:13.269 "digest": "sha256", 00:11:13.269 "state": "completed" 00:11:13.269 }, 00:11:13.269 "cntlid": 5, 00:11:13.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:13.269 "listen_address": { 00:11:13.269 "adrfam": "IPv4", 00:11:13.269 "traddr": "10.0.0.3", 00:11:13.269 "trsvcid": "4420", 00:11:13.269 "trtype": "TCP" 00:11:13.269 }, 00:11:13.269 "peer_address": { 00:11:13.269 "adrfam": "IPv4", 00:11:13.269 "traddr": "10.0.0.1", 00:11:13.269 "trsvcid": "32806", 00:11:13.269 "trtype": "TCP" 00:11:13.269 }, 00:11:13.269 "qid": 0, 00:11:13.269 "state": "enabled", 00:11:13.269 "thread": "nvmf_tgt_poll_group_000" 00:11:13.269 } 00:11:13.269 ]' 00:11:13.269 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.269 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.269 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.269 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:13.269 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.528 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.528 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.528 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.528 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
00:11:13.528 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS:
00:11:13.528 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS:
00:11:14.100 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:14.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:14.100 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:14.100 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:14.100 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:14.358 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:14.358 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:14.358 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:11:14.358 12:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:11:14.358 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:11:14.922
00:11:14.922 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:14.922 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:14.922 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:15.179 {
00:11:15.179 "auth": {
00:11:15.179 "dhgroup": "null",
00:11:15.179 "digest": "sha256",
00:11:15.179 "state": "completed"
00:11:15.179 },
00:11:15.179 "cntlid": 7,
00:11:15.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:15.179 "listen_address": {
00:11:15.179 "adrfam": "IPv4",
00:11:15.179 "traddr": "10.0.0.3",
00:11:15.179 "trsvcid": "4420",
00:11:15.179 "trtype": "TCP"
00:11:15.179 },
00:11:15.179 "peer_address": {
00:11:15.179 "adrfam": "IPv4",
00:11:15.179 "traddr": "10.0.0.1",
00:11:15.179 "trsvcid": "32838",
00:11:15.179 "trtype": "TCP"
00:11:15.179 },
00:11:15.179 "qid": 0,
00:11:15.179 "state": "enabled",
00:11:15.179 "thread": "nvmf_tgt_poll_group_000"
00:11:15.179 }
00:11:15.179 ]'
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:11:15.179 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:15.179 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:15.179 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:15.436 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:15.436 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=:
00:11:15.436 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=:
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:16.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:11:16.001 12:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:16.260 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:16.518
00:11:16.777 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:16.777 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:16.777 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:17.036 {
00:11:17.036 "auth": {
00:11:17.036 "dhgroup": "ffdhe2048",
00:11:17.036 "digest": "sha256",
00:11:17.036 "state": "completed"
00:11:17.036 },
00:11:17.036 "cntlid": 9,
00:11:17.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:17.036 "listen_address": {
00:11:17.036 "adrfam": "IPv4",
00:11:17.036 "traddr": "10.0.0.3",
00:11:17.036 "trsvcid": "4420",
00:11:17.036 "trtype": "TCP"
00:11:17.036 },
00:11:17.036 "peer_address": {
00:11:17.036 "adrfam": "IPv4",
00:11:17.036 "traddr": "10.0.0.1",
00:11:17.036 "trsvcid": "42434",
00:11:17.036 "trtype": "TCP"
00:11:17.036 },
00:11:17.036 "qid": 0,
00:11:17.036 "state": "enabled",
00:11:17.036 "thread": "nvmf_tgt_poll_group_000"
00:11:17.036 }
00:11:17.036 ]'
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:17.036 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
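The @119/@120 lines above mark the two loops that drive this whole section. A sketch of the loop skeleton they imply (array contents inferred only from what this excerpt actually exercises — groups null/ffdhe2048/ffdhe3072/ffdhe4096 and keys key0..key3; the real auth.sh defines dhgroups, keys, and the digest list elsewhere, outside this excerpt):

  # Loop skeleton implied by target/auth.sh@119-@123 in the trace (sketch, not the verbatim script).
  dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096")   # groups seen in this log
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                      # key0..key3 in this excerpt
          # Pin the host to a single digest/DH-group combination, then run one auth round.
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done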
00:11:17.295 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=:
00:11:17.295 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=:
00:11:17.862 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:17.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:17.862 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:17.862 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:17.862 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:17.862 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:17.862 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:17.862 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:11:17.862 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:18.122 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:18.690
00:11:18.690 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:18.690 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:18.690 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:18.690 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:18.690 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:18.690 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:18.690 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:18.949 {
00:11:18.949 "auth": {
00:11:18.949 "dhgroup": "ffdhe2048",
00:11:18.949 "digest": "sha256",
00:11:18.949 "state": "completed"
00:11:18.949 },
00:11:18.949 "cntlid": 11,
00:11:18.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:18.949 "listen_address": {
00:11:18.949 "adrfam": "IPv4",
00:11:18.949 "traddr": "10.0.0.3",
00:11:18.949 "trsvcid": "4420",
00:11:18.949 "trtype": "TCP"
00:11:18.949 },
00:11:18.949 "peer_address": {
00:11:18.949 "adrfam": "IPv4",
00:11:18.949 "traddr": "10.0.0.1",
00:11:18.949 "trsvcid": "42474",
00:11:18.949 "trtype": "TCP"
00:11:18.949 },
00:11:18.949 "qid": 0,
00:11:18.949 "state": "enabled",
00:11:18.949 "thread": "nvmf_tgt_poll_group_000"
00:11:18.949 }
00:11:18.949 ]'
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:18.949 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:19.209 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==:
00:11:19.209 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==:
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:20.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.146 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:20.146 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.146 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:20.146 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:20.146 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:20.714
00:11:20.714 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:20.714 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:20.714 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:20.973 {
00:11:20.973 "auth": {
00:11:20.973 "dhgroup": "ffdhe2048",
00:11:20.973 "digest": "sha256",
00:11:20.973 "state": "completed"
00:11:20.973 },
00:11:20.973 "cntlid": 13,
00:11:20.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:20.973 "listen_address": {
00:11:20.973 "adrfam": "IPv4",
00:11:20.973 "traddr": "10.0.0.3",
00:11:20.973 "trsvcid": "4420",
00:11:20.973 "trtype": "TCP"
00:11:20.973 },
00:11:20.973 "peer_address": {
00:11:20.973 "adrfam": "IPv4",
00:11:20.973 "traddr": "10.0.0.1",
00:11:20.973 "trsvcid": "42508",
00:11:20.973 "trtype": "TCP"
00:11:20.973 },
00:11:20.973 "qid": 0,
00:11:20.973 "state": "enabled",
00:11:20.973 "thread": "nvmf_tgt_poll_group_000"
00:11:20.973 }
00:11:20.973 ]'
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
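The @73-@77 checks repeated after every attach parse two RPC results with jq. A compact restatement of that verification, assuming a single qpair on the subsystem and reusing the $rpc name from the earlier sketch (command names and jq filters copied from the trace):

  # Post-attach verification as performed above (sketch).
  name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                                          # controller really attached
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]    # negotiated digest
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]] # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] # DH-HMAC-CHAP succeeded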
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:20.973 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:21.232 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS:
00:11:21.232 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS:
00:11:21.800 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:21.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:21.800 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:21.800 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.800 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:21.800 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.800 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:21.800 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:11:21.800 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:11:22.369 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:11:22.627
00:11:22.627 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:22.627 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:22.627 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:22.886 {
00:11:22.886 "auth": {
00:11:22.886 "dhgroup": "ffdhe2048",
00:11:22.886 "digest": "sha256",
00:11:22.886 "state": "completed"
00:11:22.886 },
00:11:22.886 "cntlid": 15,
00:11:22.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:22.886 "listen_address": {
00:11:22.886 "adrfam": "IPv4",
00:11:22.886 "traddr": "10.0.0.3",
00:11:22.886 "trsvcid": "4420",
00:11:22.886 "trtype": "TCP"
00:11:22.886 },
00:11:22.886 "peer_address": {
00:11:22.886 "adrfam": "IPv4",
00:11:22.886 "traddr": "10.0.0.1",
00:11:22.886 "trsvcid": "42536",
00:11:22.886 "trtype": "TCP"
00:11:22.886 },
00:11:22.886 "qid": 0,
00:11:22.886 "state": "enabled",
00:11:22.886 "thread": "nvmf_tgt_poll_group_000"
00:11:22.886 }
00:11:22.886 ]'
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:22.886 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:23.145 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:23.145 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=:
00:11:23.145 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=:
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:24.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:11:24.083 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:24.342 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:24.342 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.342 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:24.342 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.342 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:24.343 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:24.343 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:11:24.601
00:11:24.601 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:24.601 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:24.601 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:24.860 {
00:11:24.860 "auth": {
00:11:24.860 "dhgroup": "ffdhe3072",
00:11:24.860 "digest": "sha256",
00:11:24.860 "state": "completed"
00:11:24.860 },
00:11:24.860 "cntlid": 17,
00:11:24.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:24.860 "listen_address": {
00:11:24.860 "adrfam": "IPv4",
00:11:24.860 "traddr": "10.0.0.3",
00:11:24.860 "trsvcid": "4420",
00:11:24.860 "trtype": "TCP"
00:11:24.860 },
00:11:24.860 "peer_address": {
00:11:24.860 "adrfam": "IPv4",
00:11:24.860 "traddr": "10.0.0.1",
00:11:24.860 "trsvcid": "42558",
00:11:24.860 "trtype": "TCP"
00:11:24.860 },
00:11:24.860 "qid": 0,
00:11:24.860 "state": "enabled",
00:11:24.860 "thread": "nvmf_tgt_poll_group_000"
00:11:24.860 }
00:11:24.860 ]'
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:11:24.860 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:25.119 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:25.119 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:25.119 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:25.378 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=:
00:11:25.378 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=:
00:11:25.947 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:25.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:25.947 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:25.947 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:25.947 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:25.947 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:25.947 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:25.947 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:11:25.947 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
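The nvme_connect helper above drives the kernel initiator with nvme-cli. The --dhchap-secret/--dhchap-ctrl-secret arguments are DHHC-1 key blobs ("DHHC-1:<hash>:<base64>:"); to the best of my reading the second field records the hash the key was generated for (00 none/any, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the base64 payload carries the key plus a CRC. A hedged host-side equivalent, with flags copied from the @36 lines and placeholder secrets (real blobs are typically produced with nvme gen-dhchap-key):

  # Kernel-initiator connect with bidirectional DH-HMAC-CHAP (sketch; placeholder secrets).
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:<base64 host key>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller key>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0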
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:26.206 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:11:26.803
00:11:26.803 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:26.803 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:26.803 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:27.069 {
00:11:27.069 "auth": {
00:11:27.069 "dhgroup": "ffdhe3072",
00:11:27.069 "digest": "sha256",
00:11:27.069 "state": "completed"
00:11:27.069 },
00:11:27.069 "cntlid": 19,
00:11:27.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:27.069 "listen_address": {
00:11:27.069 "adrfam": "IPv4",
00:11:27.069 "traddr": "10.0.0.3",
00:11:27.069 "trsvcid": "4420",
00:11:27.069 "trtype": "TCP"
00:11:27.069 },
00:11:27.069 "peer_address": {
00:11:27.069 "adrfam": "IPv4",
00:11:27.069 "traddr": "10.0.0.1",
00:11:27.069 "trsvcid": "50270",
00:11:27.069 "trtype": "TCP"
00:11:27.069 },
00:11:27.069 "qid": 0,
00:11:27.069 "state": "enabled",
00:11:27.069 "thread": "nvmf_tgt_poll_group_000"
00:11:27.069 }
00:11:27.069 ]'
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:27.069 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:27.328 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==:
00:11:27.328 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==:
00:11:27.896 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:27.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:27.896 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:27.896 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:27.896 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:27.896 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:27.896 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:27.896 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:11:27.896 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:28.155 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:11:28.723
00:11:28.723 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:28.723 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:28.723 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:28.982 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:28.983 {
00:11:28.983 "auth": {
00:11:28.983 "dhgroup": "ffdhe3072",
00:11:28.983 "digest": "sha256",
00:11:28.983 "state": "completed"
00:11:28.983 },
00:11:28.983 "cntlid": 21,
00:11:28.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:28.983 "listen_address": {
00:11:28.983 "adrfam": "IPv4",
00:11:28.983 "traddr": "10.0.0.3",
00:11:28.983 "trsvcid": "4420",
00:11:28.983 "trtype": "TCP"
00:11:28.983 },
00:11:28.983 "peer_address": {
00:11:28.983 "adrfam": "IPv4",
00:11:28.983 "traddr": "10.0.0.1",
00:11:28.983 "trsvcid": "50304",
00:11:28.983 "trtype": "TCP"
00:11:28.983 },
00:11:28.983 "qid": 0,
00:11:28.983 "state": "enabled",
00:11:28.983 "thread": "nvmf_tgt_poll_group_000"
00:11:28.983 }
00:11:28.983 ]'
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:28.983 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:29.242 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS:
00:11:29.242 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS:
00:11:29.811 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:29.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:29.811 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:29.811 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.811 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:29.811 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.811 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:29.811 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:11:29.811 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:11:30.379 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:11:30.637
00:11:30.637 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:11:30.637 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:11:30.637 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:11:30.896 {
00:11:30.896 "auth": {
00:11:30.896 "dhgroup": "ffdhe3072",
00:11:30.896 "digest": "sha256",
00:11:30.896 "state": "completed"
00:11:30.896 },
00:11:30.896 "cntlid": 23,
00:11:30.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3",
00:11:30.896 "listen_address": {
00:11:30.896 "adrfam": "IPv4",
00:11:30.896 "traddr": "10.0.0.3",
00:11:30.896 "trsvcid": "4420",
00:11:30.896 "trtype": "TCP"
00:11:30.896 },
00:11:30.896 "peer_address": {
00:11:30.896 "adrfam": "IPv4",
00:11:30.896 "traddr": "10.0.0.1",
00:11:30.896 "trsvcid": "50324",
00:11:30.896 "trtype": "TCP"
00:11:30.896 },
00:11:30.896 "qid": 0,
00:11:30.896 "state": "enabled",
00:11:30.896 "thread": "nvmf_tgt_poll_group_000"
00:11:30.896 }
00:11:30.896 ]'
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:11:30.896 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:11:31.153 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:11:31.153 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:11:31.153 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:11:31.411 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=:
00:11:31.411 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=:
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:11:31.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:11:31.976 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.235 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.802 00:11:32.802 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.802 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.802 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.061 { 00:11:33.061 "auth": { 00:11:33.061 "dhgroup": "ffdhe4096", 00:11:33.061 "digest": "sha256", 00:11:33.061 "state": "completed" 00:11:33.061 }, 00:11:33.061 "cntlid": 25, 00:11:33.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:33.061 "listen_address": { 00:11:33.061 "adrfam": "IPv4", 00:11:33.061 "traddr": "10.0.0.3", 00:11:33.061 "trsvcid": "4420", 00:11:33.061 "trtype": "TCP" 00:11:33.061 }, 00:11:33.061 "peer_address": { 00:11:33.061 "adrfam": "IPv4", 00:11:33.061 "traddr": "10.0.0.1", 00:11:33.061 "trsvcid": "50344", 00:11:33.061 "trtype": "TCP" 00:11:33.061 }, 00:11:33.061 "qid": 0, 00:11:33.061 "state": "enabled", 00:11:33.061 "thread": "nvmf_tgt_poll_group_000" 00:11:33.061 } 00:11:33.061 ]' 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.061 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.320 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:33.320 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:34.259 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.259 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:34.259 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.259 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.259 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.259 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.259 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.259 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.517 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.774 00:11:34.774 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.774 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.774 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.033 { 00:11:35.033 "auth": { 00:11:35.033 "dhgroup": "ffdhe4096", 00:11:35.033 "digest": "sha256", 00:11:35.033 "state": "completed" 00:11:35.033 }, 00:11:35.033 "cntlid": 27, 00:11:35.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:35.033 "listen_address": { 00:11:35.033 "adrfam": "IPv4", 00:11:35.033 "traddr": "10.0.0.3", 00:11:35.033 "trsvcid": "4420", 00:11:35.033 "trtype": "TCP" 00:11:35.033 }, 00:11:35.033 "peer_address": { 00:11:35.033 "adrfam": "IPv4", 00:11:35.033 "traddr": "10.0.0.1", 00:11:35.033 "trsvcid": "50370", 00:11:35.033 "trtype": "TCP" 00:11:35.033 }, 00:11:35.033 "qid": 0, 
00:11:35.033 "state": "enabled", 00:11:35.033 "thread": "nvmf_tgt_poll_group_000" 00:11:35.033 } 00:11:35.033 ]' 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.033 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.291 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.291 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.291 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.291 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.291 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.549 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:11:35.549 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:11:36.116 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.116 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:36.116 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.116 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.116 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.116 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.116 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:36.116 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.373 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.938 00:11:36.938 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.938 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.938 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.195 { 00:11:37.195 "auth": { 00:11:37.195 "dhgroup": "ffdhe4096", 00:11:37.195 "digest": "sha256", 00:11:37.195 "state": "completed" 00:11:37.195 }, 00:11:37.195 "cntlid": 29, 00:11:37.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:37.195 "listen_address": { 00:11:37.195 "adrfam": "IPv4", 00:11:37.195 "traddr": "10.0.0.3", 00:11:37.195 "trsvcid": "4420", 00:11:37.195 "trtype": "TCP" 00:11:37.195 }, 00:11:37.195 "peer_address": { 00:11:37.195 "adrfam": "IPv4", 00:11:37.195 "traddr": "10.0.0.1", 
00:11:37.195 "trsvcid": "52402", 00:11:37.195 "trtype": "TCP" 00:11:37.195 }, 00:11:37.195 "qid": 0, 00:11:37.195 "state": "enabled", 00:11:37.195 "thread": "nvmf_tgt_poll_group_000" 00:11:37.195 } 00:11:37.195 ]' 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.195 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.195 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:37.195 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.452 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.452 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.452 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.452 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:11:37.452 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:11:38.384 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.384 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:38.384 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.384 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.384 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.384 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.384 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:38.384 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:38.384 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:38.384 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:11:38.384 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:38.384 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:38.384 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:38.384 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.385 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:11:38.385 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.385 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.385 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.385 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:38.385 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.385 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.947 00:11:38.947 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.947 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.948 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.205 { 00:11:39.205 "auth": { 00:11:39.205 "dhgroup": "ffdhe4096", 00:11:39.205 "digest": "sha256", 00:11:39.205 "state": "completed" 00:11:39.205 }, 00:11:39.205 "cntlid": 31, 00:11:39.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:39.205 "listen_address": { 00:11:39.205 "adrfam": "IPv4", 00:11:39.205 "traddr": "10.0.0.3", 00:11:39.205 "trsvcid": "4420", 00:11:39.205 "trtype": "TCP" 00:11:39.205 }, 00:11:39.205 "peer_address": { 00:11:39.205 "adrfam": "IPv4", 00:11:39.205 "traddr": 
"10.0.0.1", 00:11:39.205 "trsvcid": "52428", 00:11:39.205 "trtype": "TCP" 00:11:39.205 }, 00:11:39.205 "qid": 0, 00:11:39.205 "state": "enabled", 00:11:39.205 "thread": "nvmf_tgt_poll_group_000" 00:11:39.205 } 00:11:39.205 ]' 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.205 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.205 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:39.205 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.205 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.205 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.205 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.463 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:11:39.463 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.398 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.656 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.914 00:11:41.172 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.172 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.172 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.439 { 00:11:41.439 "auth": { 00:11:41.439 "dhgroup": "ffdhe6144", 00:11:41.439 "digest": "sha256", 00:11:41.439 "state": "completed" 00:11:41.439 }, 00:11:41.439 "cntlid": 33, 00:11:41.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:41.439 "listen_address": { 00:11:41.439 "adrfam": "IPv4", 00:11:41.439 "traddr": "10.0.0.3", 00:11:41.439 "trsvcid": "4420", 00:11:41.439 
"trtype": "TCP" 00:11:41.439 }, 00:11:41.439 "peer_address": { 00:11:41.439 "adrfam": "IPv4", 00:11:41.439 "traddr": "10.0.0.1", 00:11:41.439 "trsvcid": "52464", 00:11:41.439 "trtype": "TCP" 00:11:41.439 }, 00:11:41.439 "qid": 0, 00:11:41.439 "state": "enabled", 00:11:41.439 "thread": "nvmf_tgt_poll_group_000" 00:11:41.439 } 00:11:41.439 ]' 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.439 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.715 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:41.715 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.665 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.231 00:11:43.231 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.231 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.231 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.489 { 00:11:43.489 "auth": { 00:11:43.489 "dhgroup": "ffdhe6144", 00:11:43.489 "digest": "sha256", 00:11:43.489 "state": "completed" 00:11:43.489 }, 00:11:43.489 "cntlid": 35, 00:11:43.489 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:43.489 "listen_address": { 00:11:43.489 "adrfam": "IPv4", 00:11:43.489 "traddr": "10.0.0.3", 00:11:43.489 "trsvcid": "4420", 00:11:43.489 "trtype": "TCP" 00:11:43.489 }, 00:11:43.489 "peer_address": { 00:11:43.489 "adrfam": "IPv4", 00:11:43.489 "traddr": "10.0.0.1", 00:11:43.489 "trsvcid": "52500", 00:11:43.489 "trtype": "TCP" 00:11:43.489 }, 00:11:43.489 "qid": 0, 00:11:43.489 "state": "enabled", 00:11:43.489 "thread": "nvmf_tgt_poll_group_000" 00:11:43.489 } 00:11:43.489 ]' 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.489 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.490 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:43.490 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.490 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.490 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.490 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.054 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:11:44.054 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:11:44.620 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.620 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:44.620 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.620 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.620 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.620 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.620 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:44.620 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.878 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.136 00:11:45.394 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.394 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.394 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.652 { 00:11:45.652 "auth": { 00:11:45.652 "dhgroup": "ffdhe6144", 
00:11:45.652 "digest": "sha256", 00:11:45.652 "state": "completed" 00:11:45.652 }, 00:11:45.652 "cntlid": 37, 00:11:45.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:45.652 "listen_address": { 00:11:45.652 "adrfam": "IPv4", 00:11:45.652 "traddr": "10.0.0.3", 00:11:45.652 "trsvcid": "4420", 00:11:45.652 "trtype": "TCP" 00:11:45.652 }, 00:11:45.652 "peer_address": { 00:11:45.652 "adrfam": "IPv4", 00:11:45.652 "traddr": "10.0.0.1", 00:11:45.652 "trsvcid": "52536", 00:11:45.652 "trtype": "TCP" 00:11:45.652 }, 00:11:45.652 "qid": 0, 00:11:45.652 "state": "enabled", 00:11:45.652 "thread": "nvmf_tgt_poll_group_000" 00:11:45.652 } 00:11:45.652 ]' 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.652 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.909 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:11:45.909 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:11:46.475 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.475 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:46.475 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.475 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.475 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.475 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.475 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:11:46.475 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:46.732 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:46.732 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.732 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:46.732 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:46.732 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:46.732 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.733 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:11:46.733 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.733 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.733 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.733 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:46.733 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.733 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.298 00:11:47.298 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.298 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.298 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.555 { 00:11:47.555 "auth": { 00:11:47.555 "dhgroup": 
"ffdhe6144", 00:11:47.555 "digest": "sha256", 00:11:47.555 "state": "completed" 00:11:47.555 }, 00:11:47.555 "cntlid": 39, 00:11:47.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:47.555 "listen_address": { 00:11:47.555 "adrfam": "IPv4", 00:11:47.555 "traddr": "10.0.0.3", 00:11:47.555 "trsvcid": "4420", 00:11:47.555 "trtype": "TCP" 00:11:47.555 }, 00:11:47.555 "peer_address": { 00:11:47.555 "adrfam": "IPv4", 00:11:47.555 "traddr": "10.0.0.1", 00:11:47.555 "trsvcid": "54282", 00:11:47.555 "trtype": "TCP" 00:11:47.555 }, 00:11:47.555 "qid": 0, 00:11:47.555 "state": "enabled", 00:11:47.555 "thread": "nvmf_tgt_poll_group_000" 00:11:47.555 } 00:11:47.555 ]' 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.555 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.813 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:47.813 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.813 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.813 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.813 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.072 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:11:48.072 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:48.639 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.207 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.775 00:11:49.775 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.775 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.775 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.035 12:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.035 { 00:11:50.035 "auth": { 00:11:50.035 "dhgroup": "ffdhe8192", 00:11:50.035 "digest": "sha256", 00:11:50.035 "state": "completed" 00:11:50.035 }, 00:11:50.035 "cntlid": 41, 00:11:50.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:50.035 "listen_address": { 00:11:50.035 "adrfam": "IPv4", 00:11:50.035 "traddr": "10.0.0.3", 00:11:50.035 "trsvcid": "4420", 00:11:50.035 "trtype": "TCP" 00:11:50.035 }, 00:11:50.035 "peer_address": { 00:11:50.035 "adrfam": "IPv4", 00:11:50.035 "traddr": "10.0.0.1", 00:11:50.035 "trsvcid": "54322", 00:11:50.035 "trtype": "TCP" 00:11:50.035 }, 00:11:50.035 "qid": 0, 00:11:50.035 "state": "enabled", 00:11:50.035 "thread": "nvmf_tgt_poll_group_000" 00:11:50.035 } 00:11:50.035 ]' 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.035 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.602 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:50.603 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:51.170 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.170 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:51.170 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.170 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.170 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.170 12:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.170 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.170 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.429 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.996 00:11:51.996 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.996 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.996 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.256 12:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.256 { 00:11:52.256 "auth": { 00:11:52.256 "dhgroup": "ffdhe8192", 00:11:52.256 "digest": "sha256", 00:11:52.256 "state": "completed" 00:11:52.256 }, 00:11:52.256 "cntlid": 43, 00:11:52.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:52.256 "listen_address": { 00:11:52.256 "adrfam": "IPv4", 00:11:52.256 "traddr": "10.0.0.3", 00:11:52.256 "trsvcid": "4420", 00:11:52.256 "trtype": "TCP" 00:11:52.256 }, 00:11:52.256 "peer_address": { 00:11:52.256 "adrfam": "IPv4", 00:11:52.256 "traddr": "10.0.0.1", 00:11:52.256 "trsvcid": "54354", 00:11:52.256 "trtype": "TCP" 00:11:52.256 }, 00:11:52.256 "qid": 0, 00:11:52.256 "state": "enabled", 00:11:52.256 "thread": "nvmf_tgt_poll_group_000" 00:11:52.256 } 00:11:52.256 ]' 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.256 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.256 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.256 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.256 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.515 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:11:52.515 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:11:53.452 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.452 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:53.452 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.452 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
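Each iteration in this stretch of the trace is the same cycle: pin the host app to one digest/dhgroup combination, register the host on the subsystem with a key pair, attach a controller (which runs the DH-HMAC-CHAP handshake), verify, then tear down. A minimal sketch of the setup half of the iteration just traced (sha256/ffdhe8192 with key1), assuming the target and host apps are already running with the sockets used throughout this run and that the key1/ckey1 keyring entries were registered earlier:

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3

# Pin the host-side initiator to a single digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Register the host on the subsystem with a bidirectional key pair
# (target-side RPC; the trace's rpc_cmd wrapper hides the socket, so the
# default target socket is assumed here).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host app; this performs the DH-HMAC-CHAP
# handshake with the parameters pinned above.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1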
00:11:53.452 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.452 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.452 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:53.452 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.711 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.278 00:11:54.278 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.278 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.278 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.537 12:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.537 { 00:11:54.537 "auth": { 00:11:54.537 "dhgroup": "ffdhe8192", 00:11:54.537 "digest": "sha256", 00:11:54.537 "state": "completed" 00:11:54.537 }, 00:11:54.537 "cntlid": 45, 00:11:54.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:54.537 "listen_address": { 00:11:54.537 "adrfam": "IPv4", 00:11:54.537 "traddr": "10.0.0.3", 00:11:54.537 "trsvcid": "4420", 00:11:54.537 "trtype": "TCP" 00:11:54.537 }, 00:11:54.537 "peer_address": { 00:11:54.537 "adrfam": "IPv4", 00:11:54.537 "traddr": "10.0.0.1", 00:11:54.537 "trsvcid": "54380", 00:11:54.537 "trtype": "TCP" 00:11:54.537 }, 00:11:54.537 "qid": 0, 00:11:54.537 "state": "enabled", 00:11:54.537 "thread": "nvmf_tgt_poll_group_000" 00:11:54.537 } 00:11:54.537 ]' 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:54.537 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.795 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.795 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.795 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.052 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:11:55.052 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:11:55.618 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.618 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:55.618 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
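The checks that follow every attach are jq one-liners over two RPCs: bdev_nvme_get_controllers on the host side and nvmf_subsystem_get_qpairs on the target side, asserting the negotiated digest, dhgroup, and auth state. A sketch of that verification block with the expected values for the iteration above (same socket assumptions as the previous sketch):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0

# The controller must have come up under the expected name.
name=$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# The qpair's auth block must show the combination that was configured.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before the next key/dhgroup combination is configured.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0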
00:11:55.618 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.618 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.618 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.618 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:55.618 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:55.877 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.443 00:11:56.443 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.443 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.443 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.702 
12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.702 { 00:11:56.702 "auth": { 00:11:56.702 "dhgroup": "ffdhe8192", 00:11:56.702 "digest": "sha256", 00:11:56.702 "state": "completed" 00:11:56.702 }, 00:11:56.702 "cntlid": 47, 00:11:56.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:56.702 "listen_address": { 00:11:56.702 "adrfam": "IPv4", 00:11:56.702 "traddr": "10.0.0.3", 00:11:56.702 "trsvcid": "4420", 00:11:56.702 "trtype": "TCP" 00:11:56.702 }, 00:11:56.702 "peer_address": { 00:11:56.702 "adrfam": "IPv4", 00:11:56.702 "traddr": "10.0.0.1", 00:11:56.702 "trsvcid": "54606", 00:11:56.702 "trtype": "TCP" 00:11:56.702 }, 00:11:56.702 "qid": 0, 00:11:56.702 "state": "enabled", 00:11:56.702 "thread": "nvmf_tgt_poll_group_000" 00:11:56.702 } 00:11:56.702 ]' 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.702 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.960 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:11:56.960 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:11:57.529 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.529 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:57.529 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.529 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
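Besides the SPDK host app, every iteration repeats the handshake from the kernel initiator via nvme-cli, which takes the secrets as literal DHHC-1 strings rather than keyring names. The connect/disconnect pair just traced, written out (values copied verbatim from this run; key3 has no controller key, so only --dhchap-secret is passed):

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 \
    --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 \
    --dhchap-secret 'DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0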
00:11:57.529 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.529 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:57.529 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.529 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.530 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:57.530 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:57.788 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.047 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.305 00:11:58.305 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.305 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.305 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.564 { 00:11:58.564 "auth": { 00:11:58.564 "dhgroup": "null", 00:11:58.564 "digest": "sha384", 00:11:58.564 "state": "completed" 00:11:58.564 }, 00:11:58.564 "cntlid": 49, 00:11:58.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:11:58.564 "listen_address": { 00:11:58.564 "adrfam": "IPv4", 00:11:58.564 "traddr": "10.0.0.3", 00:11:58.564 "trsvcid": "4420", 00:11:58.564 "trtype": "TCP" 00:11:58.564 }, 00:11:58.564 "peer_address": { 00:11:58.564 "adrfam": "IPv4", 00:11:58.564 "traddr": "10.0.0.1", 00:11:58.564 "trsvcid": "54636", 00:11:58.564 "trtype": "TCP" 00:11:58.564 }, 00:11:58.564 "qid": 0, 00:11:58.564 "state": "enabled", 00:11:58.564 "thread": "nvmf_tgt_poll_group_000" 00:11:58.564 } 00:11:58.564 ]' 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.564 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.146 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:59.146 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:11:59.714 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.714 12:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:11:59.714 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.714 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.714 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.714 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.714 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:59.714 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.974 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.233 00:12:00.491 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.491 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.491 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.491 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.491 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.491 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.491 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.751 { 00:12:00.751 "auth": { 00:12:00.751 "dhgroup": "null", 00:12:00.751 "digest": "sha384", 00:12:00.751 "state": "completed" 00:12:00.751 }, 00:12:00.751 "cntlid": 51, 00:12:00.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:00.751 "listen_address": { 00:12:00.751 "adrfam": "IPv4", 00:12:00.751 "traddr": "10.0.0.3", 00:12:00.751 "trsvcid": "4420", 00:12:00.751 "trtype": "TCP" 00:12:00.751 }, 00:12:00.751 "peer_address": { 00:12:00.751 "adrfam": "IPv4", 00:12:00.751 "traddr": "10.0.0.1", 00:12:00.751 "trsvcid": "54660", 00:12:00.751 "trtype": "TCP" 00:12:00.751 }, 00:12:00.751 "qid": 0, 00:12:00.751 "state": "enabled", 00:12:00.751 "thread": "nvmf_tgt_poll_group_000" 00:12:00.751 } 00:12:00.751 ]' 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.751 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.014 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:01.014 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:01.578 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.578 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.579 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:01.579 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.579 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.579 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.579 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.579 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:01.579 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.143 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.400 00:12:02.400 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.400 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.400 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.657 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.657 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.657 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.657 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.657 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.657 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.657 { 00:12:02.657 "auth": { 00:12:02.657 "dhgroup": "null", 00:12:02.657 "digest": "sha384", 00:12:02.658 "state": "completed" 00:12:02.658 }, 00:12:02.658 "cntlid": 53, 00:12:02.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:02.658 "listen_address": { 00:12:02.658 "adrfam": "IPv4", 00:12:02.658 "traddr": "10.0.0.3", 00:12:02.658 "trsvcid": "4420", 00:12:02.658 "trtype": "TCP" 00:12:02.658 }, 00:12:02.658 "peer_address": { 00:12:02.658 "adrfam": "IPv4", 00:12:02.658 "traddr": "10.0.0.1", 00:12:02.658 "trsvcid": "54692", 00:12:02.658 "trtype": "TCP" 00:12:02.658 }, 00:12:02.658 "qid": 0, 00:12:02.658 "state": "enabled", 00:12:02.658 "thread": "nvmf_tgt_poll_group_000" 00:12:02.658 } 00:12:02.658 ]' 00:12:02.658 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.658 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.658 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.658 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:02.658 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.658 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.658 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.658 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.970 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:02.970 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.901 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.159 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.159 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:04.159 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.159 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.416 00:12:04.416 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.416 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
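The DHHC-1:NN:<base64>: strings passed around above follow the NVMe in-band authentication secret representation: the two-digit field after DHHC-1 identifies the hash used when the secret is transformed (00 = untransformed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 blob carries the secret plus a CRC-32. Not part of this run, but for reference, recent nvme-cli can mint such secrets; the subcommand and flag spelling below should be checked against the installed version and are an assumption here:

# Hedged sketch: generate a SHA-256-transformed DHHC-1 secret with nvme-cli
# (verify flags with `nvme gen-dhchap-key --help`; spelling may differ by
# version).
nvme gen-dhchap-key --key-length=32 --hmac=1 \
    --nqn nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
# Prints a DHHC-1:01:<base64>: string usable with --dhchap-secret above.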
00:12:04.416 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.672 { 00:12:04.672 "auth": { 00:12:04.672 "dhgroup": "null", 00:12:04.672 "digest": "sha384", 00:12:04.672 "state": "completed" 00:12:04.672 }, 00:12:04.672 "cntlid": 55, 00:12:04.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:04.672 "listen_address": { 00:12:04.672 "adrfam": "IPv4", 00:12:04.672 "traddr": "10.0.0.3", 00:12:04.672 "trsvcid": "4420", 00:12:04.672 "trtype": "TCP" 00:12:04.672 }, 00:12:04.672 "peer_address": { 00:12:04.672 "adrfam": "IPv4", 00:12:04.672 "traddr": "10.0.0.1", 00:12:04.672 "trsvcid": "54722", 00:12:04.672 "trtype": "TCP" 00:12:04.672 }, 00:12:04.672 "qid": 0, 00:12:04.672 "state": "enabled", 00:12:04.672 "thread": "nvmf_tgt_poll_group_000" 00:12:04.672 } 00:12:04.672 ]' 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:04.672 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.929 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.929 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.930 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.187 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:05.187 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
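The @118/@119/@120 frames that bracket each cycle (and advance the dhgroup to ffdhe2048 in the next step) come from three nested loops in target/auth.sh. A reconstruction of that driver loop, not the script verbatim: the array contents are inferred from the combinations this run walks through (sha256 finished, sha384 in progress), and hostrpc, connect_authenticate, and the keys array are the script's own helpers:

# Reconstructed driver loop behind the @118/@119/@120 trace markers.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do            # @118
    for dhgroup in "${dhgroups[@]}"; do      # @119
        for keyid in "${!keys[@]}"; do       # @120
            # Reconfigure the host app for this combination ... (@121)
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # ... then run one full add_host/attach/verify/teardown cycle (@123).
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done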
00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.750 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.008 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.267 00:12:06.267 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.267 12:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.267 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.526 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.526 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.526 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.526 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.526 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.526 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.526 { 00:12:06.526 "auth": { 00:12:06.526 "dhgroup": "ffdhe2048", 00:12:06.526 "digest": "sha384", 00:12:06.526 "state": "completed" 00:12:06.526 }, 00:12:06.526 "cntlid": 57, 00:12:06.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:06.526 "listen_address": { 00:12:06.526 "adrfam": "IPv4", 00:12:06.526 "traddr": "10.0.0.3", 00:12:06.526 "trsvcid": "4420", 00:12:06.526 "trtype": "TCP" 00:12:06.526 }, 00:12:06.526 "peer_address": { 00:12:06.526 "adrfam": "IPv4", 00:12:06.526 "traddr": "10.0.0.1", 00:12:06.526 "trsvcid": "60546", 00:12:06.526 "trtype": "TCP" 00:12:06.526 }, 00:12:06.526 "qid": 0, 00:12:06.526 "state": "enabled", 00:12:06.526 "thread": "nvmf_tgt_poll_group_000" 00:12:06.526 } 00:12:06.526 ]' 00:12:06.526 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.785 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.785 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.785 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.785 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.785 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.785 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.785 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.043 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:07.043 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: 
--dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:07.609 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.609 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:07.609 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.609 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.609 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.609 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.609 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:07.609 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:07.867 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:07.867 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.868 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.126 00:12:08.384 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.384 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.384 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.641 { 00:12:08.641 "auth": { 00:12:08.641 "dhgroup": "ffdhe2048", 00:12:08.641 "digest": "sha384", 00:12:08.641 "state": "completed" 00:12:08.641 }, 00:12:08.641 "cntlid": 59, 00:12:08.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:08.641 "listen_address": { 00:12:08.641 "adrfam": "IPv4", 00:12:08.641 "traddr": "10.0.0.3", 00:12:08.641 "trsvcid": "4420", 00:12:08.641 "trtype": "TCP" 00:12:08.641 }, 00:12:08.641 "peer_address": { 00:12:08.641 "adrfam": "IPv4", 00:12:08.641 "traddr": "10.0.0.1", 00:12:08.641 "trsvcid": "60568", 00:12:08.641 "trtype": "TCP" 00:12:08.641 }, 00:12:08.641 "qid": 0, 00:12:08.641 "state": "enabled", 00:12:08.641 "thread": "nvmf_tgt_poll_group_000" 00:12:08.641 } 00:12:08.641 ]' 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.641 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.899 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:08.899 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.835 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.094 00:12:10.352 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.352 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.352 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.610 { 00:12:10.610 "auth": { 00:12:10.610 "dhgroup": "ffdhe2048", 00:12:10.610 "digest": "sha384", 00:12:10.610 "state": "completed" 00:12:10.610 }, 00:12:10.610 "cntlid": 61, 00:12:10.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:10.610 "listen_address": { 00:12:10.610 "adrfam": "IPv4", 00:12:10.610 "traddr": "10.0.0.3", 00:12:10.610 "trsvcid": "4420", 00:12:10.610 "trtype": "TCP" 00:12:10.610 }, 00:12:10.610 "peer_address": { 00:12:10.610 "adrfam": "IPv4", 00:12:10.610 "traddr": "10.0.0.1", 00:12:10.610 "trsvcid": "60598", 00:12:10.610 "trtype": "TCP" 00:12:10.610 }, 00:12:10.610 "qid": 0, 00:12:10.610 "state": "enabled", 00:12:10.610 "thread": "nvmf_tgt_poll_group_000" 00:12:10.610 } 00:12:10.610 ]' 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.610 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.869 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:10.869 12:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:11.805 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.805 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:11.805 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.805 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.805 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.805 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.806 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.806 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.065 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.324 00:12:12.324 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.324 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.324 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.583 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.583 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.583 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.583 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.583 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.583 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.583 { 00:12:12.583 "auth": { 00:12:12.583 "dhgroup": "ffdhe2048", 00:12:12.583 "digest": "sha384", 00:12:12.583 "state": "completed" 00:12:12.583 }, 00:12:12.583 "cntlid": 63, 00:12:12.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:12.583 "listen_address": { 00:12:12.583 "adrfam": "IPv4", 00:12:12.583 "traddr": "10.0.0.3", 00:12:12.583 "trsvcid": "4420", 00:12:12.583 "trtype": "TCP" 00:12:12.583 }, 00:12:12.583 "peer_address": { 00:12:12.583 "adrfam": "IPv4", 00:12:12.583 "traddr": "10.0.0.1", 00:12:12.583 "trsvcid": "60626", 00:12:12.583 "trtype": "TCP" 00:12:12.583 }, 00:12:12.583 "qid": 0, 00:12:12.583 "state": "enabled", 00:12:12.583 "thread": "nvmf_tgt_poll_group_000" 00:12:12.583 } 00:12:12.583 ]' 00:12:12.583 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.841 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.841 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.841 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:12.841 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.841 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.841 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.841 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.100 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:13.100 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:13.666 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.666 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:13.666 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.666 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.666 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.667 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.667 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.667 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:13.667 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:13.925 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.204 00:12:14.462 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.462 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.462 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.462 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.462 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.462 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.462 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.719 { 00:12:14.719 "auth": { 00:12:14.719 "dhgroup": "ffdhe3072", 00:12:14.719 "digest": "sha384", 00:12:14.719 "state": "completed" 00:12:14.719 }, 00:12:14.719 "cntlid": 65, 00:12:14.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:14.719 "listen_address": { 00:12:14.719 "adrfam": "IPv4", 00:12:14.719 "traddr": "10.0.0.3", 00:12:14.719 "trsvcid": "4420", 00:12:14.719 "trtype": "TCP" 00:12:14.719 }, 00:12:14.719 "peer_address": { 00:12:14.719 "adrfam": "IPv4", 00:12:14.719 "traddr": "10.0.0.1", 00:12:14.719 "trsvcid": "60642", 00:12:14.719 "trtype": "TCP" 00:12:14.719 }, 00:12:14.719 "qid": 0, 00:12:14.719 "state": "enabled", 00:12:14.719 "thread": "nvmf_tgt_poll_group_000" 00:12:14.719 } 00:12:14.719 ]' 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.719 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.989 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:14.989 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:15.630 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.630 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:15.630 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.630 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.630 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.630 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.630 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:15.630 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.888 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.889 12:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.889 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.147 00:12:16.147 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.147 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.147 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.406 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.406 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.406 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.406 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.406 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.406 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.406 { 00:12:16.406 "auth": { 00:12:16.406 "dhgroup": "ffdhe3072", 00:12:16.406 "digest": "sha384", 00:12:16.406 "state": "completed" 00:12:16.406 }, 00:12:16.406 "cntlid": 67, 00:12:16.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:16.406 "listen_address": { 00:12:16.406 "adrfam": "IPv4", 00:12:16.406 "traddr": "10.0.0.3", 00:12:16.406 "trsvcid": "4420", 00:12:16.406 "trtype": "TCP" 00:12:16.406 }, 00:12:16.406 "peer_address": { 00:12:16.406 "adrfam": "IPv4", 00:12:16.406 "traddr": "10.0.0.1", 00:12:16.406 "trsvcid": "48270", 00:12:16.406 "trtype": "TCP" 00:12:16.406 }, 00:12:16.406 "qid": 0, 00:12:16.406 "state": "enabled", 00:12:16.406 "thread": "nvmf_tgt_poll_group_000" 00:12:16.406 } 00:12:16.406 ]' 00:12:16.406 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.665 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.665 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.665 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:16.665 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.665 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.665 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.665 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.924 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:16.924 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:17.491 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.491 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:17.491 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.491 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.491 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.491 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.491 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.491 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.750 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.009 00:12:18.269 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.269 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.269 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.540 { 00:12:18.540 "auth": { 00:12:18.540 "dhgroup": "ffdhe3072", 00:12:18.540 "digest": "sha384", 00:12:18.540 "state": "completed" 00:12:18.540 }, 00:12:18.540 "cntlid": 69, 00:12:18.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:18.540 "listen_address": { 00:12:18.540 "adrfam": "IPv4", 00:12:18.540 "traddr": "10.0.0.3", 00:12:18.540 "trsvcid": "4420", 00:12:18.540 "trtype": "TCP" 00:12:18.540 }, 00:12:18.540 "peer_address": { 00:12:18.540 "adrfam": "IPv4", 00:12:18.540 "traddr": "10.0.0.1", 00:12:18.540 "trsvcid": "48282", 00:12:18.540 "trtype": "TCP" 00:12:18.540 }, 00:12:18.540 "qid": 0, 00:12:18.540 "state": "enabled", 00:12:18.540 "thread": "nvmf_tgt_poll_group_000" 00:12:18.540 } 00:12:18.540 ]' 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:18.540 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.817 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:18.817 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:19.384 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.385 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:19.385 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.385 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.385 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.385 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.385 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.385 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.644 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.212 00:12:20.212 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.212 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.212 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.471 { 00:12:20.471 "auth": { 00:12:20.471 "dhgroup": "ffdhe3072", 00:12:20.471 "digest": "sha384", 00:12:20.471 "state": "completed" 00:12:20.471 }, 00:12:20.471 "cntlid": 71, 00:12:20.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:20.471 "listen_address": { 00:12:20.471 "adrfam": "IPv4", 00:12:20.471 "traddr": "10.0.0.3", 00:12:20.471 "trsvcid": "4420", 00:12:20.471 "trtype": "TCP" 00:12:20.471 }, 00:12:20.471 "peer_address": { 00:12:20.471 "adrfam": "IPv4", 00:12:20.471 "traddr": "10.0.0.1", 00:12:20.471 "trsvcid": "48320", 00:12:20.471 "trtype": "TCP" 00:12:20.471 }, 00:12:20.471 "qid": 0, 00:12:20.471 "state": "enabled", 00:12:20.471 "thread": "nvmf_tgt_poll_group_000" 00:12:20.471 } 00:12:20.471 ]' 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.471 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.040 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:21.040 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.607 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.866 12:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.866 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.125 00:12:22.125 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.125 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.125 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.383 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.383 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.383 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.383 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.383 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.383 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.383 { 00:12:22.383 "auth": { 00:12:22.383 "dhgroup": "ffdhe4096", 00:12:22.383 "digest": "sha384", 00:12:22.383 "state": "completed" 00:12:22.383 }, 00:12:22.383 "cntlid": 73, 00:12:22.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:22.384 "listen_address": { 00:12:22.384 "adrfam": "IPv4", 00:12:22.384 "traddr": "10.0.0.3", 00:12:22.384 "trsvcid": "4420", 00:12:22.384 "trtype": "TCP" 00:12:22.384 }, 00:12:22.384 "peer_address": { 00:12:22.384 "adrfam": "IPv4", 00:12:22.384 "traddr": "10.0.0.1", 00:12:22.384 "trsvcid": "48342", 00:12:22.384 "trtype": "TCP" 00:12:22.384 }, 00:12:22.384 "qid": 0, 00:12:22.384 "state": "enabled", 00:12:22.384 "thread": "nvmf_tgt_poll_group_000" 00:12:22.384 } 00:12:22.384 ]' 00:12:22.384 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.384 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.384 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.643 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:22.643 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.643 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.643 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.643 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.903 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:22.903 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:23.470 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.470 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:23.470 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.470 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.729 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.729 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.729 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:23.729 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.987 12:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.987 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.246 00:12:24.246 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.246 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.246 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.811 { 00:12:24.811 "auth": { 00:12:24.811 "dhgroup": "ffdhe4096", 00:12:24.811 "digest": "sha384", 00:12:24.811 "state": "completed" 00:12:24.811 }, 00:12:24.811 "cntlid": 75, 00:12:24.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:24.811 "listen_address": { 00:12:24.811 "adrfam": "IPv4", 00:12:24.811 "traddr": "10.0.0.3", 00:12:24.811 "trsvcid": "4420", 00:12:24.811 "trtype": "TCP" 00:12:24.811 }, 00:12:24.811 "peer_address": { 00:12:24.811 "adrfam": "IPv4", 00:12:24.811 "traddr": "10.0.0.1", 00:12:24.811 "trsvcid": "48376", 00:12:24.811 "trtype": "TCP" 00:12:24.811 }, 00:12:24.811 "qid": 0, 00:12:24.811 "state": "enabled", 00:12:24.811 "thread": "nvmf_tgt_poll_group_000" 00:12:24.811 } 00:12:24.811 ]' 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.811 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.069 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:25.069 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:25.635 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.635 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:25.635 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.635 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.635 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.635 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.635 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.635 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.892 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.893 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.893 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.893 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.463 00:12:26.463 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.463 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.463 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.722 { 00:12:26.722 "auth": { 00:12:26.722 "dhgroup": "ffdhe4096", 00:12:26.722 "digest": "sha384", 00:12:26.722 "state": "completed" 00:12:26.722 }, 00:12:26.722 "cntlid": 77, 00:12:26.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:26.722 "listen_address": { 00:12:26.722 "adrfam": "IPv4", 00:12:26.722 "traddr": "10.0.0.3", 00:12:26.722 "trsvcid": "4420", 00:12:26.722 "trtype": "TCP" 00:12:26.722 }, 00:12:26.722 "peer_address": { 00:12:26.722 "adrfam": "IPv4", 00:12:26.722 "traddr": "10.0.0.1", 00:12:26.722 "trsvcid": "37348", 00:12:26.722 "trtype": "TCP" 00:12:26.722 }, 00:12:26.722 "qid": 0, 00:12:26.722 "state": "enabled", 00:12:26.722 "thread": "nvmf_tgt_poll_group_000" 00:12:26.722 } 00:12:26.722 ]' 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.722 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.980 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:26.980 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:27.916 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.916 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:27.916 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.916 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.916 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.916 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.916 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:27.916 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.174 12:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.174 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.432 00:12:28.432 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.432 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.432 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.691 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.691 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.691 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.691 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.691 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.691 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.691 { 00:12:28.691 "auth": { 00:12:28.691 "dhgroup": "ffdhe4096", 00:12:28.691 "digest": "sha384", 00:12:28.691 "state": "completed" 00:12:28.691 }, 00:12:28.691 "cntlid": 79, 00:12:28.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:28.691 "listen_address": { 00:12:28.691 "adrfam": "IPv4", 00:12:28.691 "traddr": "10.0.0.3", 00:12:28.691 "trsvcid": "4420", 00:12:28.691 "trtype": "TCP" 00:12:28.691 }, 00:12:28.691 "peer_address": { 00:12:28.691 "adrfam": "IPv4", 00:12:28.691 "traddr": "10.0.0.1", 00:12:28.691 "trsvcid": "37378", 00:12:28.691 "trtype": "TCP" 00:12:28.691 }, 00:12:28.691 "qid": 0, 00:12:28.691 "state": "enabled", 00:12:28.691 "thread": "nvmf_tgt_poll_group_000" 00:12:28.691 } 00:12:28.691 ]' 00:12:28.691 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.950 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.950 12:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.950 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:28.950 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.950 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.950 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.950 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.209 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:29.209 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:29.775 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.342 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.601 00:12:30.601 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.601 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.601 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.888 { 00:12:30.888 "auth": { 00:12:30.888 "dhgroup": "ffdhe6144", 00:12:30.888 "digest": "sha384", 00:12:30.888 "state": "completed" 00:12:30.888 }, 00:12:30.888 "cntlid": 81, 00:12:30.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:30.888 "listen_address": { 00:12:30.888 "adrfam": "IPv4", 00:12:30.888 "traddr": "10.0.0.3", 00:12:30.888 "trsvcid": "4420", 00:12:30.888 "trtype": "TCP" 00:12:30.888 }, 00:12:30.888 "peer_address": { 00:12:30.888 "adrfam": "IPv4", 00:12:30.888 "traddr": "10.0.0.1", 00:12:30.888 "trsvcid": "37416", 00:12:30.888 "trtype": "TCP" 00:12:30.888 }, 00:12:30.888 "qid": 0, 00:12:30.888 "state": "enabled", 00:12:30.888 "thread": "nvmf_tgt_poll_group_000" 00:12:30.888 } 00:12:30.888 ]' 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
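The passes recorded above and below all follow one fixed shape per digest/dhgroup/key combination: pin the host's DH-HMAC-CHAP options to a single digest and DH group with bdev_nvme_set_options, register the host NQN on the subsystem with the key pair under test, attach a controller through the host RPC socket, confirm via nvmf_subsystem_get_qpairs that qpair 0 negotiated the expected digest, dhgroup, and "completed" auth state, and tear everything down again. Below is a minimal standalone sketch of that loop for the sha384 sweep shown here; it assumes the target app serves RPCs on its default socket, the host app on /var/tmp/host.sock, and that the keyring entries key0..key3 (and ckey0..ckey2; key3 carries no controller key in this run) were registered earlier in the script.

#!/usr/bin/env bash
# Condensed sketch (not the literal target/auth.sh) of the per-combination
# DH-HMAC-CHAP check this log is executing. Socket paths, key names, NQNs,
# and addresses are taken from the run above; the loop structure is an
# assumption inferred from the repeating xtrace pattern.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
digest=sha384

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in 0 1 2 3; do
    # key3 is used without a controller key in this run.
    ckey=()
    if [[ $keyid -ne 3 ]]; then
      ckey=(--dhchap-ctrlr-key "ckey$keyid")
    fi

    # Restrict the host to exactly one digest/dhgroup, so a successful
    # attach proves that specific combination negotiated.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow this host on the target with the key pair under test
    # (target-side RPC, default socket).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" ${ckey[@]+"${ckey[@]}"}

    # Attach from the host side and inspect the negotiated parameters
    # on the resulting qpair.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" ${ckey[@]+"${ckey[@]}"}
    auth=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq '.[0].auth')
    [[ $(jq -r .digest  <<<"$auth") == "$digest"  ]]
    [[ $(jq -r .dhgroup <<<"$auth") == "$dhgroup" ]]
    [[ $(jq -r .state   <<<"$auth") == completed  ]]

    # Clean up so the next combination starts from scratch.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done

Between the detach and the remove_host of each pass, the run above additionally re-validates the same combination end to end with the kernel initiator, e.g. nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret 'DHHC-1:...' (plus --dhchap-ctrl-secret where a controller key exists), followed by nvme disconnect -n "$subnqn".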
00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:30.888 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.146 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.146 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.146 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.146 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:31.146 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:32.081 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.081 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:32.081 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.081 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.081 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.081 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.081 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:32.081 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.340 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.607 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.971 { 00:12:32.971 "auth": { 00:12:32.971 "dhgroup": "ffdhe6144", 00:12:32.971 "digest": "sha384", 00:12:32.971 "state": "completed" 00:12:32.971 }, 00:12:32.971 "cntlid": 83, 00:12:32.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:32.971 "listen_address": { 00:12:32.971 "adrfam": "IPv4", 00:12:32.971 "traddr": "10.0.0.3", 00:12:32.971 "trsvcid": "4420", 00:12:32.971 "trtype": "TCP" 00:12:32.971 }, 00:12:32.971 "peer_address": { 00:12:32.971 "adrfam": "IPv4", 00:12:32.971 "traddr": "10.0.0.1", 00:12:32.971 "trsvcid": "37440", 00:12:32.971 "trtype": "TCP" 00:12:32.971 }, 00:12:32.971 "qid": 0, 00:12:32.971 "state": 
"enabled", 00:12:32.971 "thread": "nvmf_tgt_poll_group_000" 00:12:32.971 } 00:12:32.971 ]' 00:12:32.971 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.229 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.229 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.229 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:33.229 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.229 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.229 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.229 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.508 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:33.508 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:34.076 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.077 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:34.077 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.077 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.077 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.077 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.077 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:34.077 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.352 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.916 00:12:34.916 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.916 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.916 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.174 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.174 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.174 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.174 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.174 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.174 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.174 { 00:12:35.174 "auth": { 00:12:35.174 "dhgroup": "ffdhe6144", 00:12:35.174 "digest": "sha384", 00:12:35.174 "state": "completed" 00:12:35.174 }, 00:12:35.174 "cntlid": 85, 00:12:35.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:35.174 "listen_address": { 00:12:35.174 "adrfam": "IPv4", 00:12:35.174 "traddr": "10.0.0.3", 00:12:35.174 "trsvcid": "4420", 00:12:35.174 "trtype": "TCP" 00:12:35.174 }, 00:12:35.174 "peer_address": { 00:12:35.174 "adrfam": "IPv4", 00:12:35.174 "traddr": "10.0.0.1", 00:12:35.174 
"trsvcid": "37458", 00:12:35.174 "trtype": "TCP" 00:12:35.174 }, 00:12:35.174 "qid": 0, 00:12:35.174 "state": "enabled", 00:12:35.174 "thread": "nvmf_tgt_poll_group_000" 00:12:35.174 } 00:12:35.174 ]' 00:12:35.174 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.174 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.174 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.432 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:35.432 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.432 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.432 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.432 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.691 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:35.691 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:36.260 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.260 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:36.260 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.260 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.260 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.260 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.260 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:36.260 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:36.518 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:36.776 00:12:36.776 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.776 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.776 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.034 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.292 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.292 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.292 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.292 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.292 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.292 { 00:12:37.292 "auth": { 00:12:37.292 "dhgroup": "ffdhe6144", 00:12:37.292 "digest": "sha384", 00:12:37.292 "state": "completed" 00:12:37.292 }, 00:12:37.292 "cntlid": 87, 00:12:37.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:37.292 "listen_address": { 00:12:37.292 "adrfam": "IPv4", 00:12:37.292 "traddr": "10.0.0.3", 00:12:37.292 "trsvcid": "4420", 00:12:37.292 "trtype": "TCP" 00:12:37.292 }, 00:12:37.292 "peer_address": { 00:12:37.292 "adrfam": "IPv4", 00:12:37.292 "traddr": "10.0.0.1", 
00:12:37.292 "trsvcid": "44770", 00:12:37.292 "trtype": "TCP" 00:12:37.292 }, 00:12:37.292 "qid": 0, 00:12:37.292 "state": "enabled", 00:12:37.292 "thread": "nvmf_tgt_poll_group_000" 00:12:37.292 } 00:12:37.292 ]' 00:12:37.292 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.292 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.292 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.292 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:37.292 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.292 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.292 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.292 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.552 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:37.552 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:38.120 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.379 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.947 00:12:38.947 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.947 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.947 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.207 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.207 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.207 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.207 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.207 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.207 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.207 { 00:12:39.207 "auth": { 00:12:39.207 "dhgroup": "ffdhe8192", 00:12:39.207 "digest": "sha384", 00:12:39.207 "state": "completed" 00:12:39.207 }, 00:12:39.207 "cntlid": 89, 00:12:39.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:39.207 "listen_address": { 00:12:39.207 "adrfam": "IPv4", 00:12:39.207 "traddr": "10.0.0.3", 00:12:39.207 "trsvcid": "4420", 00:12:39.207 "trtype": "TCP" 
00:12:39.207 }, 00:12:39.207 "peer_address": { 00:12:39.207 "adrfam": "IPv4", 00:12:39.207 "traddr": "10.0.0.1", 00:12:39.207 "trsvcid": "44796", 00:12:39.207 "trtype": "TCP" 00:12:39.207 }, 00:12:39.207 "qid": 0, 00:12:39.207 "state": "enabled", 00:12:39.207 "thread": "nvmf_tgt_poll_group_000" 00:12:39.207 } 00:12:39.207 ]' 00:12:39.207 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.207 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.207 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.207 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:39.207 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.466 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.466 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.466 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.466 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:39.466 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:40.033 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.033 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:40.033 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.033 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.033 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.033 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.033 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:40.033 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:40.291 12:16:11 
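The cycle traced above (and repeated for each key below) is the per-key body of connect_authenticate in target/auth.sh. A minimal sketch of the host/target RPC sequence, recombining only commands that appear verbatim in this trace; it is not the verbatim script, and the key names key0/ckey0 are assumed to have been registered in the keyring earlier in the run:

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app socket
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3

    # Pin the initiator to one digest/dhgroup so the negotiated values are predictable.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    # Register the host on the target with host and controller (bidirectional) keys.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach an authenticated controller from the host side.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
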
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:40.292 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.292 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:40.292 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:40.292 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:40.292 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.292 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.292 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.292 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.550 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.550 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.550 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.550 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.117 00:12:41.117 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.117 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.117 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.376 { 00:12:41.376 "auth": { 00:12:41.376 "dhgroup": "ffdhe8192", 00:12:41.376 "digest": "sha384", 00:12:41.376 "state": "completed" 00:12:41.376 }, 00:12:41.376 "cntlid": 91, 00:12:41.376 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:41.376 "listen_address": { 00:12:41.376 "adrfam": "IPv4", 00:12:41.376 "traddr": "10.0.0.3", 00:12:41.376 "trsvcid": "4420", 00:12:41.376 "trtype": "TCP" 00:12:41.376 }, 00:12:41.376 "peer_address": { 00:12:41.376 "adrfam": "IPv4", 00:12:41.376 "traddr": "10.0.0.1", 00:12:41.376 "trsvcid": "44820", 00:12:41.376 "trtype": "TCP" 00:12:41.376 }, 00:12:41.376 "qid": 0, 00:12:41.376 "state": "enabled", 00:12:41.376 "thread": "nvmf_tgt_poll_group_000" 00:12:41.376 } 00:12:41.376 ]' 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.376 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.634 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:41.635 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.571 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.138 00:12:43.138 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.138 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.138 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.706 { 00:12:43.706 "auth": { 00:12:43.706 "dhgroup": "ffdhe8192", 
00:12:43.706 "digest": "sha384", 00:12:43.706 "state": "completed" 00:12:43.706 }, 00:12:43.706 "cntlid": 93, 00:12:43.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:43.706 "listen_address": { 00:12:43.706 "adrfam": "IPv4", 00:12:43.706 "traddr": "10.0.0.3", 00:12:43.706 "trsvcid": "4420", 00:12:43.706 "trtype": "TCP" 00:12:43.706 }, 00:12:43.706 "peer_address": { 00:12:43.706 "adrfam": "IPv4", 00:12:43.706 "traddr": "10.0.0.1", 00:12:43.706 "trsvcid": "44852", 00:12:43.706 "trtype": "TCP" 00:12:43.706 }, 00:12:43.706 "qid": 0, 00:12:43.706 "state": "enabled", 00:12:43.706 "thread": "nvmf_tgt_poll_group_000" 00:12:43.706 } 00:12:43.706 ]' 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.706 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.965 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:43.965 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.900 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:45.469 00:12:45.469 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.469 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.469 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.037 { 00:12:46.037 "auth": { 00:12:46.037 "dhgroup": 
"ffdhe8192", 00:12:46.037 "digest": "sha384", 00:12:46.037 "state": "completed" 00:12:46.037 }, 00:12:46.037 "cntlid": 95, 00:12:46.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:46.037 "listen_address": { 00:12:46.037 "adrfam": "IPv4", 00:12:46.037 "traddr": "10.0.0.3", 00:12:46.037 "trsvcid": "4420", 00:12:46.037 "trtype": "TCP" 00:12:46.037 }, 00:12:46.037 "peer_address": { 00:12:46.037 "adrfam": "IPv4", 00:12:46.037 "traddr": "10.0.0.1", 00:12:46.037 "trsvcid": "51160", 00:12:46.037 "trtype": "TCP" 00:12:46.037 }, 00:12:46.037 "qid": 0, 00:12:46.037 "state": "enabled", 00:12:46.037 "thread": "nvmf_tgt_poll_group_000" 00:12:46.037 } 00:12:46.037 ]' 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.037 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.296 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:46.296 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.863 
12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:46.863 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.122 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.381 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.381 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.381 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.381 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.640 00:12:47.640 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.640 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.640 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.899 { 00:12:47.899 "auth": { 00:12:47.899 "dhgroup": "null", 00:12:47.899 "digest": "sha512", 00:12:47.899 "state": "completed" 00:12:47.899 }, 00:12:47.899 "cntlid": 97, 00:12:47.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:47.899 "listen_address": { 00:12:47.899 "adrfam": "IPv4", 00:12:47.899 "traddr": "10.0.0.3", 00:12:47.899 "trsvcid": "4420", 00:12:47.899 "trtype": "TCP" 00:12:47.899 }, 00:12:47.899 "peer_address": { 00:12:47.899 "adrfam": "IPv4", 00:12:47.899 "traddr": "10.0.0.1", 00:12:47.899 "trsvcid": "51192", 00:12:47.899 "trtype": "TCP" 00:12:47.899 }, 00:12:47.899 "qid": 0, 00:12:47.899 "state": "enabled", 00:12:47.899 "thread": "nvmf_tgt_poll_group_000" 00:12:47.899 } 00:12:47.899 ]' 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.899 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.466 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:48.466 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:49.034 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.034 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:49.034 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.034 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.034 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:49.034 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.034 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:49.034 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.293 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.552 00:12:49.552 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.552 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.552 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.811 12:16:20 
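The qpair dump that follows is not just logged, it is asserted on: the escaped patterns such as \s\h\a\5\1\2 in the [[ ]] lines are bash xtrace's rendering of plain string comparisons. A sketch of the check for this sha512/null iteration, using the jq filters exactly as they appear in this trace and reusing the $rpc/hostrpc helpers from the sketch above:

    # Fetch the active qpairs for the subsystem and assert on the first one.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # The controller name is verified on the host side before the qpair query.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
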
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.811 { 00:12:49.811 "auth": { 00:12:49.811 "dhgroup": "null", 00:12:49.811 "digest": "sha512", 00:12:49.811 "state": "completed" 00:12:49.811 }, 00:12:49.811 "cntlid": 99, 00:12:49.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:49.811 "listen_address": { 00:12:49.811 "adrfam": "IPv4", 00:12:49.811 "traddr": "10.0.0.3", 00:12:49.811 "trsvcid": "4420", 00:12:49.811 "trtype": "TCP" 00:12:49.811 }, 00:12:49.811 "peer_address": { 00:12:49.811 "adrfam": "IPv4", 00:12:49.811 "traddr": "10.0.0.1", 00:12:49.811 "trsvcid": "51216", 00:12:49.811 "trtype": "TCP" 00:12:49.811 }, 00:12:49.811 "qid": 0, 00:12:49.811 "state": "enabled", 00:12:49.811 "thread": "nvmf_tgt_poll_group_000" 00:12:49.811 } 00:12:49.811 ]' 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.811 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.380 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:50.380 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:50.949 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.949 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:50.949 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.949 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.949 12:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.949 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.949 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:50.949 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.207 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.465 00:12:51.465 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.465 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.465 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.722 { 00:12:51.722 "auth": { 00:12:51.722 "dhgroup": "null", 00:12:51.722 "digest": "sha512", 00:12:51.722 "state": "completed" 00:12:51.722 }, 00:12:51.722 "cntlid": 101, 00:12:51.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:51.722 "listen_address": { 00:12:51.722 "adrfam": "IPv4", 00:12:51.722 "traddr": "10.0.0.3", 00:12:51.722 "trsvcid": "4420", 00:12:51.722 "trtype": "TCP" 00:12:51.722 }, 00:12:51.722 "peer_address": { 00:12:51.722 "adrfam": "IPv4", 00:12:51.722 "traddr": "10.0.0.1", 00:12:51.722 "trsvcid": "51232", 00:12:51.722 "trtype": "TCP" 00:12:51.722 }, 00:12:51.722 "qid": 0, 00:12:51.722 "state": "enabled", 00:12:51.722 "thread": "nvmf_tgt_poll_group_000" 00:12:51.722 } 00:12:51.722 ]' 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.722 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.987 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:51.987 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:52.599 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.599 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:52.599 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.599 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:52.599 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.599 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.599 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.599 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.857 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:53.116 00:12:53.116 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.116 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.116 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.374 { 00:12:53.374 "auth": { 00:12:53.374 "dhgroup": "null", 00:12:53.374 "digest": "sha512", 00:12:53.374 "state": "completed" 00:12:53.374 }, 00:12:53.374 "cntlid": 103, 00:12:53.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:53.374 "listen_address": { 00:12:53.374 "adrfam": "IPv4", 00:12:53.374 "traddr": "10.0.0.3", 00:12:53.374 "trsvcid": "4420", 00:12:53.374 "trtype": "TCP" 00:12:53.374 }, 00:12:53.374 "peer_address": { 00:12:53.374 "adrfam": "IPv4", 00:12:53.374 "traddr": "10.0.0.1", 00:12:53.374 "trsvcid": "51266", 00:12:53.374 "trtype": "TCP" 00:12:53.374 }, 00:12:53.374 "qid": 0, 00:12:53.374 "state": "enabled", 00:12:53.374 "thread": "nvmf_tgt_poll_group_000" 00:12:53.374 } 00:12:53.374 ]' 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:53.374 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.685 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.685 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.686 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.686 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:53.686 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:54.252 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.509 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.767 00:12:55.024 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.024 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.024 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.281 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.281 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.281 
12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.281 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.281 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.281 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.281 { 00:12:55.281 "auth": { 00:12:55.281 "dhgroup": "ffdhe2048", 00:12:55.281 "digest": "sha512", 00:12:55.281 "state": "completed" 00:12:55.281 }, 00:12:55.281 "cntlid": 105, 00:12:55.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:55.281 "listen_address": { 00:12:55.281 "adrfam": "IPv4", 00:12:55.281 "traddr": "10.0.0.3", 00:12:55.281 "trsvcid": "4420", 00:12:55.281 "trtype": "TCP" 00:12:55.281 }, 00:12:55.281 "peer_address": { 00:12:55.281 "adrfam": "IPv4", 00:12:55.281 "traddr": "10.0.0.1", 00:12:55.281 "trsvcid": "51304", 00:12:55.281 "trtype": "TCP" 00:12:55.281 }, 00:12:55.281 "qid": 0, 00:12:55.281 "state": "enabled", 00:12:55.281 "thread": "nvmf_tgt_poll_group_000" 00:12:55.281 } 00:12:55.281 ]' 00:12:55.281 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.281 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.281 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.282 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:55.282 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.282 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.282 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.282 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.539 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:55.539 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:12:56.105 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.105 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:56.105 12:16:26 
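Each iteration also round-trips the in-kernel initiator: the same DH-HMAC-CHAP secrets are handed to nvme-cli verbatim in their DHHC-1 text form, and the host entry is removed afterwards so the next digest/dhgroup/key combination starts from a clean subsystem. The step as it appears in this trace, reflowed for readability only (secrets copied verbatim from the log above):

    # Kernel-initiator connect with bidirectional DH-HMAC-CHAP secrets.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 \
        --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 \
        --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: \
        --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=:
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Drop the host from the subsystem before the next pass.
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
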
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.105 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.105 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.105 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.105 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.105 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.363 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.622 00:12:56.622 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.622 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.622 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.880 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
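
The trace above is one full connect_authenticate pass. Condensed, each pass pins the host to a single digest/DH-group pair, authorizes the host NQN on the subsystem with a DH-HMAC-CHAP key pair, and attaches a controller through the host app's RPC socket; authentication runs during the CONNECT exchange. A sketch using the addresses and NQNs from this log (key1/ckey1 are keyring names set up earlier in the run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3

# Host side: restrict negotiation to one digest / DH-group pair.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side: authorize the host NQN with a bidirectional key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller; DH-HMAC-CHAP runs during CONNECT.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
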
[[ nvme0 == \n\v\m\e\0 ]] 00:12:56.880 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.880 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.881 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.881 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.881 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.881 { 00:12:56.881 "auth": { 00:12:56.881 "dhgroup": "ffdhe2048", 00:12:56.881 "digest": "sha512", 00:12:56.881 "state": "completed" 00:12:56.881 }, 00:12:56.881 "cntlid": 107, 00:12:56.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:56.881 "listen_address": { 00:12:56.881 "adrfam": "IPv4", 00:12:56.881 "traddr": "10.0.0.3", 00:12:56.881 "trsvcid": "4420", 00:12:56.881 "trtype": "TCP" 00:12:56.881 }, 00:12:56.881 "peer_address": { 00:12:56.881 "adrfam": "IPv4", 00:12:56.881 "traddr": "10.0.0.1", 00:12:56.881 "trsvcid": "54916", 00:12:56.881 "trtype": "TCP" 00:12:56.881 }, 00:12:56.881 "qid": 0, 00:12:56.881 "state": "enabled", 00:12:56.881 "thread": "nvmf_tgt_poll_group_000" 00:12:56.881 } 00:12:56.881 ]' 00:12:56.881 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.881 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.881 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.881 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:56.881 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.139 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.139 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.139 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.398 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:57.398 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
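
Every host-side call above goes through hostrpc (target/auth.sh@31), which the trace shows is rpc.py pointed at the second SPDK instance's socket rather than the target's default one. A plausible reconstruction of the wrapper:

# Same rpc.py, different Unix socket: host-side SPDK app vs. the target.
hostrpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
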
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.965 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.224 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.224 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.224 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.224 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.483 00:12:58.483 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.483 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.483 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.741 { 00:12:58.741 "auth": { 00:12:58.741 "dhgroup": "ffdhe2048", 00:12:58.741 "digest": "sha512", 00:12:58.741 "state": "completed" 00:12:58.741 }, 00:12:58.741 "cntlid": 109, 00:12:58.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:12:58.741 "listen_address": { 00:12:58.741 "adrfam": "IPv4", 00:12:58.741 "traddr": "10.0.0.3", 00:12:58.741 "trsvcid": "4420", 00:12:58.741 "trtype": "TCP" 00:12:58.741 }, 00:12:58.741 "peer_address": { 00:12:58.741 "adrfam": "IPv4", 00:12:58.741 "traddr": "10.0.0.1", 00:12:58.741 "trsvcid": "54944", 00:12:58.741 "trtype": "TCP" 00:12:58.741 }, 00:12:58.741 "qid": 0, 00:12:58.741 "state": "enabled", 00:12:58.741 "thread": "nvmf_tgt_poll_group_000" 00:12:58.741 } 00:12:58.741 ]' 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:58.741 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.000 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.000 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.000 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.000 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:59.000 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:12:59.936 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.936 12:16:30 
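
The nvme connect invocations above carry in-band authentication secrets in the DHHC-1 format, DHHC-1:<hh>:<base64 key material>:, where <hh> identifies the hash used to transform the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A sketch of the kernel-initiator connect under that reading, with the secrets elided:

# Secrets elided; only the shape matters here. The test reuses one UUID
# for both the host NQN suffix and --hostid.
uuid=bd2c7f89-972d-42a2-ab59-8d5d886644e3
host_key='DHHC-1:02:<base64-key>:'   # 02 = SHA-384-transformed secret
ctrl_key='DHHC-1:01:<base64-key>:'   # 01 = SHA-256-transformed secret

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:$uuid" --hostid "$uuid" -l 0 \
    --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
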
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:12:59.936 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.936 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.936 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.936 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.936 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:59.936 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.195 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.454 00:13:00.454 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.454 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.454 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
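
Note the nvmf_subsystem_add_host above passes --dhchap-key key3 with no controller key: in connect_authenticate, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expands to nothing when the controller-key slot for that key index ($3 is the function's key-index argument) is empty, so key3 exercises unidirectional authentication only. The idiom in isolation (array contents illustrative):

# ${var:+word} expands to word only when var is set and non-empty, so an
# empty controller-key slot drops the whole option pair.
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]='')

for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> ${ckey[*]:-<no controller key>}"
done
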
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.713 { 00:13:00.713 "auth": { 00:13:00.713 "dhgroup": "ffdhe2048", 00:13:00.713 "digest": "sha512", 00:13:00.713 "state": "completed" 00:13:00.713 }, 00:13:00.713 "cntlid": 111, 00:13:00.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:00.713 "listen_address": { 00:13:00.713 "adrfam": "IPv4", 00:13:00.713 "traddr": "10.0.0.3", 00:13:00.713 "trsvcid": "4420", 00:13:00.713 "trtype": "TCP" 00:13:00.713 }, 00:13:00.713 "peer_address": { 00:13:00.713 "adrfam": "IPv4", 00:13:00.713 "traddr": "10.0.0.1", 00:13:00.713 "trsvcid": "54968", 00:13:00.713 "trtype": "TCP" 00:13:00.713 }, 00:13:00.713 "qid": 0, 00:13:00.713 "state": "enabled", 00:13:00.713 "thread": "nvmf_tgt_poll_group_000" 00:13:00.713 } 00:13:00.713 ]' 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.713 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.972 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:00.972 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:01.540 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:01.799 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.800 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.368 00:13:02.368 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.368 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.368 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.626 { 00:13:02.626 "auth": { 00:13:02.626 "dhgroup": "ffdhe3072", 00:13:02.626 "digest": "sha512", 00:13:02.626 "state": "completed" 00:13:02.626 }, 00:13:02.626 "cntlid": 113, 00:13:02.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:02.626 "listen_address": { 00:13:02.626 "adrfam": "IPv4", 00:13:02.626 "traddr": "10.0.0.3", 00:13:02.626 "trsvcid": "4420", 00:13:02.626 "trtype": "TCP" 00:13:02.626 }, 00:13:02.626 "peer_address": { 00:13:02.626 "adrfam": "IPv4", 00:13:02.626 "traddr": "10.0.0.1", 00:13:02.626 "trsvcid": "55006", 00:13:02.626 "trtype": "TCP" 00:13:02.626 }, 00:13:02.626 "qid": 0, 00:13:02.626 "state": "enabled", 00:13:02.626 "thread": "nvmf_tgt_poll_group_000" 00:13:02.626 } 00:13:02.626 ]' 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:02.626 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.627 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.627 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.627 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.885 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:02.885 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret 
DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:03.452 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.453 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:03.453 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.453 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.453 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.453 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.453 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.453 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.712 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.977 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.977 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.977 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.977 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.236 00:13:04.236 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.236 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.236 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.494 { 00:13:04.494 "auth": { 00:13:04.494 "dhgroup": "ffdhe3072", 00:13:04.494 "digest": "sha512", 00:13:04.494 "state": "completed" 00:13:04.494 }, 00:13:04.494 "cntlid": 115, 00:13:04.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:04.494 "listen_address": { 00:13:04.494 "adrfam": "IPv4", 00:13:04.494 "traddr": "10.0.0.3", 00:13:04.494 "trsvcid": "4420", 00:13:04.494 "trtype": "TCP" 00:13:04.494 }, 00:13:04.494 "peer_address": { 00:13:04.494 "adrfam": "IPv4", 00:13:04.494 "traddr": "10.0.0.1", 00:13:04.494 "trsvcid": "55038", 00:13:04.494 "trtype": "TCP" 00:13:04.494 }, 00:13:04.494 "qid": 0, 00:13:04.494 "state": "enabled", 00:13:04.494 "thread": "nvmf_tgt_poll_group_000" 00:13:04.494 } 00:13:04.494 ]' 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.494 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.752 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:13:04.752 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid 
bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:13:05.318 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.318 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:05.318 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.318 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.318 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.318 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.318 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:05.318 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.142 00:13:06.142 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.142 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.142 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.142 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.142 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.142 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.142 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.400 { 00:13:06.400 "auth": { 00:13:06.400 "dhgroup": "ffdhe3072", 00:13:06.400 "digest": "sha512", 00:13:06.400 "state": "completed" 00:13:06.400 }, 00:13:06.400 "cntlid": 117, 00:13:06.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:06.400 "listen_address": { 00:13:06.400 "adrfam": "IPv4", 00:13:06.400 "traddr": "10.0.0.3", 00:13:06.400 "trsvcid": "4420", 00:13:06.400 "trtype": "TCP" 00:13:06.400 }, 00:13:06.400 "peer_address": { 00:13:06.400 "adrfam": "IPv4", 00:13:06.400 "traddr": "10.0.0.1", 00:13:06.400 "trsvcid": "43298", 00:13:06.400 "trtype": "TCP" 00:13:06.400 }, 00:13:06.400 "qid": 0, 00:13:06.400 "state": "enabled", 00:13:06.400 "thread": "nvmf_tgt_poll_group_000" 00:13:06.400 } 00:13:06.400 ]' 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.400 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.658 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:13:06.658 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:13:07.223 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.481 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.739 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.739 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:07.739 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.739 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.997 00:13:07.997 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.997 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.997 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.256 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.256 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.256 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.256 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.256 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.256 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.256 { 00:13:08.256 "auth": { 00:13:08.256 "dhgroup": "ffdhe3072", 00:13:08.256 "digest": "sha512", 00:13:08.256 "state": "completed" 00:13:08.256 }, 00:13:08.256 "cntlid": 119, 00:13:08.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:08.256 "listen_address": { 00:13:08.256 "adrfam": "IPv4", 00:13:08.256 "traddr": "10.0.0.3", 00:13:08.256 "trsvcid": "4420", 00:13:08.256 "trtype": "TCP" 00:13:08.256 }, 00:13:08.256 "peer_address": { 00:13:08.256 "adrfam": "IPv4", 00:13:08.256 "traddr": "10.0.0.1", 00:13:08.256 "trsvcid": "43318", 00:13:08.256 "trtype": "TCP" 00:13:08.256 }, 00:13:08.256 "qid": 0, 00:13:08.256 "state": "enabled", 00:13:08.256 "thread": "nvmf_tgt_poll_group_000" 00:13:08.256 } 00:13:08.256 ]' 00:13:08.256 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.256 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.256 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.256 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:08.256 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.256 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.256 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.256 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.515 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:08.515 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:09.450 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.450 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
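
The for dhgroup / for keyid frames above (target/auth.sh@119-120) are the outer structure of this part of the run: every FFDHE group is exercised against every key index with the digest held at sha512. Roughly, assuming the script's hostrpc and connect_authenticate helpers are in scope (array contents inferred from the groups and keys visible in this excerpt; the real arrays may be longer):

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen in this excerpt
keys=(key0 key1 key2 key3)                 # key3 has no controller key

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done
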
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.016 00:13:10.016 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.016 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.016 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.016 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.016 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.016 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.016 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.275 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.275 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.275 { 00:13:10.275 "auth": { 00:13:10.275 "dhgroup": "ffdhe4096", 00:13:10.275 "digest": "sha512", 00:13:10.275 "state": "completed" 00:13:10.275 }, 00:13:10.275 "cntlid": 121, 00:13:10.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:10.275 "listen_address": { 00:13:10.275 "adrfam": "IPv4", 00:13:10.275 "traddr": "10.0.0.3", 00:13:10.275 "trsvcid": "4420", 00:13:10.275 "trtype": "TCP" 00:13:10.275 }, 00:13:10.275 "peer_address": { 00:13:10.275 "adrfam": "IPv4", 00:13:10.275 "traddr": "10.0.0.1", 00:13:10.275 "trsvcid": "43350", 00:13:10.275 "trtype": "TCP" 00:13:10.275 }, 00:13:10.275 "qid": 0, 00:13:10.275 "state": "enabled", 00:13:10.275 "thread": "nvmf_tgt_poll_group_000" 00:13:10.275 } 00:13:10.275 ]' 00:13:10.275 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.275 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.275 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.275 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:10.275 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.275 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.275 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.275 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.534 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret 
DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:10.534 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:11.101 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.101 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:11.101 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.101 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.101 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.101 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.101 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:11.101 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.360 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.926 00:13:11.926 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.926 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.926 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.184 { 00:13:12.184 "auth": { 00:13:12.184 "dhgroup": "ffdhe4096", 00:13:12.184 "digest": "sha512", 00:13:12.184 "state": "completed" 00:13:12.184 }, 00:13:12.184 "cntlid": 123, 00:13:12.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:12.184 "listen_address": { 00:13:12.184 "adrfam": "IPv4", 00:13:12.184 "traddr": "10.0.0.3", 00:13:12.184 "trsvcid": "4420", 00:13:12.184 "trtype": "TCP" 00:13:12.184 }, 00:13:12.184 "peer_address": { 00:13:12.184 "adrfam": "IPv4", 00:13:12.184 "traddr": "10.0.0.1", 00:13:12.184 "trsvcid": "43374", 00:13:12.184 "trtype": "TCP" 00:13:12.184 }, 00:13:12.184 "qid": 0, 00:13:12.184 "state": "enabled", 00:13:12.184 "thread": "nvmf_tgt_poll_group_000" 00:13:12.184 } 00:13:12.184 ]' 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.184 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.443 12:16:43 
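The stretch of trace above is one per-key verification round of target/auth.sh: the host-side RPC server on /var/tmp/host.sock attaches an NVMe bdev controller with a DH-HMAC-CHAP key pair, confirms the controller exists, inspects the negotiated qpair, and detaches again. A minimal sketch of the attach/verify half, using only calls that appear in the log (the rpc.py path and socket are taken from the trace; $hostnqn stands for the uuid-based host NQN logged above, and key1/ckey1 assume the keys registered earlier in the test):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    # Attach, authenticating to the target with key1 and demanding ckey1 back
    $rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The attach only counts if the controller actually shows up afterwards
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]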
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:13:12.443 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:13:13.017 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.017 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:13.017 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.017 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.017 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.017 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.017 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:13.017 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.286 12:16:44 
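Each round also exercises the kernel initiator: nvme-cli connects with the same secrets in DHHC-1 text form and then disconnects, which is where the "disconnected 1 controller(s)" lines come from. Condensed from the trace (secrets shortened here; the full DHHC-1:... strings are logged above, and $hostnqn/$hostid are the uuid values from the trace):

    # Kernel-initiator leg of the round: authenticate in both directions
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "DHHC-1:01:OGYz..." --dhchap-ctrl-secret "DHHC-1:02:M2M0..."
    # Tear the session down again; expect: disconnected 1 controller(s)
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0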
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.286 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.853 00:13:13.853 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.853 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.853 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.111 { 00:13:14.111 "auth": { 00:13:14.111 "dhgroup": "ffdhe4096", 00:13:14.111 "digest": "sha512", 00:13:14.111 "state": "completed" 00:13:14.111 }, 00:13:14.111 "cntlid": 125, 00:13:14.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:14.111 "listen_address": { 00:13:14.111 "adrfam": "IPv4", 00:13:14.111 "traddr": "10.0.0.3", 00:13:14.111 "trsvcid": "4420", 00:13:14.111 "trtype": "TCP" 00:13:14.111 }, 00:13:14.111 "peer_address": { 00:13:14.111 "adrfam": "IPv4", 00:13:14.111 "traddr": "10.0.0.1", 00:13:14.111 "trsvcid": "43400", 00:13:14.111 "trtype": "TCP" 00:13:14.111 }, 00:13:14.111 "qid": 0, 00:13:14.111 "state": "enabled", 00:13:14.111 "thread": "nvmf_tgt_poll_group_000" 00:13:14.111 } 00:13:14.111 ]' 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.111 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.369 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:13:14.369 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:13:14.934 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.934 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:14.934 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.934 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:14.934 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.191 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.192 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.192 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:15.192 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.192 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.758 00:13:15.758 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.758 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.758 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.017 { 00:13:16.017 "auth": { 00:13:16.017 "dhgroup": "ffdhe4096", 00:13:16.017 "digest": "sha512", 00:13:16.017 "state": "completed" 00:13:16.017 }, 00:13:16.017 "cntlid": 127, 00:13:16.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:16.017 "listen_address": { 00:13:16.017 "adrfam": "IPv4", 00:13:16.017 "traddr": "10.0.0.3", 00:13:16.017 "trsvcid": "4420", 00:13:16.017 "trtype": "TCP" 00:13:16.017 }, 00:13:16.017 "peer_address": { 00:13:16.017 "adrfam": "IPv4", 00:13:16.017 "traddr": "10.0.0.1", 00:13:16.017 "trsvcid": "39290", 00:13:16.017 "trtype": "TCP" 00:13:16.017 }, 00:13:16.017 "qid": 0, 00:13:16.017 "state": "enabled", 00:13:16.017 "thread": "nvmf_tgt_poll_group_000" 00:13:16.017 } 00:13:16.017 ]' 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.017 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.584 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:16.584 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:17.152 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.410 12:16:48 
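Two things are worth noting at this point in the trace. First, the key3 rounds above ran one-way: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion collapses and both nvmf_subsystem_add_host and the attach carry --dhchap-key key3 alone, meaning the host authenticates to the target but does not challenge the controller. Second, the outer loop has advanced from ffdhe4096 to ffdhe6144, and each iteration re-arms the host stack before reconnecting; the pattern, as in the log:

    # Pin the host to exactly one digest/dhgroup combination for the next round
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144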
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.410 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.667 00:13:17.924 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.924 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.924 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.181 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.181 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.181 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.181 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.181 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.181 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.181 { 00:13:18.181 "auth": { 00:13:18.181 "dhgroup": "ffdhe6144", 00:13:18.181 "digest": "sha512", 00:13:18.181 "state": "completed" 00:13:18.181 }, 00:13:18.181 "cntlid": 129, 00:13:18.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:18.181 "listen_address": { 00:13:18.181 "adrfam": "IPv4", 00:13:18.181 "traddr": "10.0.0.3", 00:13:18.182 "trsvcid": "4420", 00:13:18.182 "trtype": "TCP" 00:13:18.182 }, 00:13:18.182 "peer_address": { 00:13:18.182 "adrfam": "IPv4", 00:13:18.182 "traddr": "10.0.0.1", 00:13:18.182 "trsvcid": "39314", 00:13:18.182 "trtype": "TCP" 00:13:18.182 }, 00:13:18.182 "qid": 0, 00:13:18.182 "state": "enabled", 00:13:18.182 "thread": "nvmf_tgt_poll_group_000" 00:13:18.182 } 00:13:18.182 ]' 00:13:18.182 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.182 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.182 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.182 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:18.182 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.182 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.182 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.182 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.746 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:18.747 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:19.312 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.312 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:19.312 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.312 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.312 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.312 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.312 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:19.312 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.571 12:16:50 
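The qpair dumps in the trace are the target-side proof that authentication actually ran: after each attach, nvmf_subsystem_get_qpairs is called on the subsystem and three jq probes assert the negotiated digest, dhgroup, and final auth state. Reduced to its essentials ($rpc_tgt is a stand-in for the target-side rpc.py invocation behind rpc_cmd, which this excerpt does not spell out):

    qpairs=$($rpc_tgt nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # DH-HMAC-CHAP finished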
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.571 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.831 00:13:19.831 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.831 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.831 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.399 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.399 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.399 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.399 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.400 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.400 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.400 { 00:13:20.400 "auth": { 00:13:20.400 "dhgroup": "ffdhe6144", 00:13:20.400 "digest": "sha512", 00:13:20.400 "state": "completed" 00:13:20.400 }, 00:13:20.400 "cntlid": 131, 00:13:20.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:20.400 "listen_address": { 00:13:20.400 "adrfam": "IPv4", 00:13:20.400 "traddr": "10.0.0.3", 00:13:20.400 "trsvcid": "4420", 00:13:20.400 "trtype": "TCP" 00:13:20.400 }, 00:13:20.400 "peer_address": { 00:13:20.400 "adrfam": "IPv4", 00:13:20.400 "traddr": "10.0.0.1", 00:13:20.400 "trsvcid": "39344", 00:13:20.400 "trtype": "TCP" 00:13:20.400 }, 00:13:20.400 "qid": 0, 00:13:20.400 "state": "enabled", 00:13:20.400 "thread": "nvmf_tgt_poll_group_000" 00:13:20.400 } 00:13:20.400 ]' 00:13:20.400 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.400 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.400 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.400 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:20.400 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:20.400 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.400 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.400 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.659 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:13:20.659 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:13:21.227 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.486 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:21.486 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.486 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.486 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.486 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.486 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.486 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.747 12:16:52 
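On the target, DH-HMAC-CHAP is provisioned per host NQN: before each round the script admits the host to the subsystem with that round's key pair, and once the host has disconnected it revokes the entry again, so no combination can piggyback on the previous one. The bracketing pair of RPCs, as seen throughout the trace:

    # Admit the host, requiring key2 from it and presenting ckey2 back
    $rpc_tgt nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # ... attach/verify/detach and nvme connect/disconnect happen here ...
    $rpc_tgt nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"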
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.747 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.006 00:13:22.266 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.266 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.266 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.266 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.266 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.266 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.266 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.525 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.525 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.525 { 00:13:22.525 "auth": { 00:13:22.526 "dhgroup": "ffdhe6144", 00:13:22.526 "digest": "sha512", 00:13:22.526 "state": "completed" 00:13:22.526 }, 00:13:22.526 "cntlid": 133, 00:13:22.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:22.526 "listen_address": { 00:13:22.526 "adrfam": "IPv4", 00:13:22.526 "traddr": "10.0.0.3", 00:13:22.526 "trsvcid": "4420", 00:13:22.526 "trtype": "TCP" 00:13:22.526 }, 00:13:22.526 "peer_address": { 00:13:22.526 "adrfam": "IPv4", 00:13:22.526 "traddr": "10.0.0.1", 00:13:22.526 "trsvcid": "39364", 00:13:22.526 "trtype": "TCP" 00:13:22.526 }, 00:13:22.526 "qid": 0, 00:13:22.526 "state": "enabled", 00:13:22.526 "thread": "nvmf_tgt_poll_group_000" 00:13:22.526 } 00:13:22.526 ]' 00:13:22.526 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.526 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.526 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.526 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:22.526 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.526 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.526 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.526 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.786 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:13:22.786 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:13:23.354 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.354 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:23.354 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.354 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.354 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.354 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.354 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:23.354 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.614 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.183 00:13:24.183 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.183 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.183 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.441 { 00:13:24.441 "auth": { 00:13:24.441 "dhgroup": "ffdhe6144", 00:13:24.441 "digest": "sha512", 00:13:24.441 "state": "completed" 00:13:24.441 }, 00:13:24.441 "cntlid": 135, 00:13:24.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:24.441 "listen_address": { 00:13:24.441 "adrfam": "IPv4", 00:13:24.441 "traddr": "10.0.0.3", 00:13:24.441 "trsvcid": "4420", 00:13:24.441 "trtype": "TCP" 00:13:24.441 }, 00:13:24.441 "peer_address": { 00:13:24.441 "adrfam": "IPv4", 00:13:24.441 "traddr": "10.0.0.1", 00:13:24.441 "trsvcid": "39408", 00:13:24.441 "trtype": "TCP" 00:13:24.441 }, 00:13:24.441 "qid": 0, 00:13:24.441 "state": "enabled", 00:13:24.441 "thread": "nvmf_tgt_poll_group_000" 00:13:24.441 } 00:13:24.441 ]' 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.441 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.699 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:24.699 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.634 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.893 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.893 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.893 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.893 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.462 00:13:26.462 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.462 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.462 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.721 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.721 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.721 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.721 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.721 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.721 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.721 { 00:13:26.721 "auth": { 00:13:26.721 "dhgroup": "ffdhe8192", 00:13:26.721 "digest": "sha512", 00:13:26.721 "state": "completed" 00:13:26.721 }, 00:13:26.721 "cntlid": 137, 00:13:26.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:26.721 "listen_address": { 00:13:26.721 "adrfam": "IPv4", 00:13:26.721 "traddr": "10.0.0.3", 00:13:26.721 "trsvcid": "4420", 00:13:26.721 "trtype": "TCP" 00:13:26.721 }, 00:13:26.722 "peer_address": { 00:13:26.722 "adrfam": "IPv4", 00:13:26.722 "traddr": "10.0.0.1", 00:13:26.722 "trsvcid": "58706", 00:13:26.722 "trtype": "TCP" 00:13:26.722 }, 00:13:26.722 "qid": 0, 00:13:26.722 "state": "enabled", 00:13:26.722 "thread": "nvmf_tgt_poll_group_000" 00:13:26.722 } 00:13:26.722 ]' 00:13:26.722 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.722 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.722 12:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.722 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:26.722 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.722 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.722 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.722 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.290 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:27.290 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:27.857 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.857 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:27.857 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.857 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.857 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.857 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.857 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:27.858 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:28.116 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:28.117 12:16:58 
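The trace is now in the ffdhe8192 leg of the sweep, and the auth.sh line markers in the prefixes (@119, @120, @121, @123) expose the driving structure: an outer loop over dhgroups, an inner loop over key indices, a set_options call per iteration, and connect_authenticate performing the round itself. Reconstructed from those markers (digest fixed at sha512, which is all this excerpt shows):

    # Approximate shape of the sweep, per the @119-@123 markers in the trace
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192
        for keyid in "${!keys[@]}"; do         # 0..3; key3 has no controller key
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done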
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.117 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.684 00:13:28.685 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.685 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.685 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.944 { 00:13:28.944 "auth": { 00:13:28.944 "dhgroup": "ffdhe8192", 00:13:28.944 "digest": "sha512", 00:13:28.944 "state": "completed" 00:13:28.944 }, 00:13:28.944 "cntlid": 139, 00:13:28.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:28.944 "listen_address": { 00:13:28.944 "adrfam": "IPv4", 00:13:28.944 "traddr": "10.0.0.3", 00:13:28.944 "trsvcid": "4420", 00:13:28.944 "trtype": "TCP" 00:13:28.944 }, 00:13:28.944 "peer_address": { 00:13:28.944 "adrfam": "IPv4", 00:13:28.944 "traddr": "10.0.0.1", 00:13:28.944 "trsvcid": "58736", 00:13:28.944 "trtype": "TCP" 00:13:28.944 }, 00:13:28.944 "qid": 0, 00:13:28.944 "state": "enabled", 00:13:28.944 "thread": "nvmf_tgt_poll_group_000" 00:13:28.944 } 00:13:28.944 ]' 00:13:28.944 12:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:28.944 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.203 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.203 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.203 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.462 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:13:29.462 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: --dhchap-ctrl-secret DHHC-1:02:M2M0MDFkNmM1ZThmNjNlZmZkYjQ4MzAwNzdkZjhjNDUxMTk5YjRiMWViNjkwYTZh5CSEYw==: 00:13:30.030 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.030 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:30.030 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.030 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.030 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.030 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.030 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.030 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.288 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.289 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.856 00:13:30.856 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.856 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.856 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.115 { 00:13:31.115 "auth": { 00:13:31.115 "dhgroup": "ffdhe8192", 00:13:31.115 "digest": "sha512", 00:13:31.115 "state": "completed" 00:13:31.115 }, 00:13:31.115 "cntlid": 141, 00:13:31.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:31.115 "listen_address": { 00:13:31.115 "adrfam": "IPv4", 00:13:31.115 "traddr": "10.0.0.3", 00:13:31.115 "trsvcid": "4420", 00:13:31.115 "trtype": "TCP" 00:13:31.115 }, 00:13:31.115 "peer_address": { 00:13:31.115 "adrfam": "IPv4", 00:13:31.115 "traddr": "10.0.0.1", 00:13:31.115 "trsvcid": "58756", 00:13:31.115 "trtype": "TCP" 00:13:31.115 }, 00:13:31.115 "qid": 0, 00:13:31.115 "state": 
"enabled", 00:13:31.115 "thread": "nvmf_tgt_poll_group_000" 00:13:31.115 } 00:13:31.115 ]' 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:31.115 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.375 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.375 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.375 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.635 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:13:31.635 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:01:YzhiNzFlNDY3YjQ2NGZhMjJiN2RkMzliY2IxZDc4NDVPh5SS: 00:13:32.204 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.204 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:32.204 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.204 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.204 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.204 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.204 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:32.204 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:32.464 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.031 00:13:33.031 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.031 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.031 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.290 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.290 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.290 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.290 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.290 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.290 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.290 { 00:13:33.290 "auth": { 00:13:33.290 "dhgroup": "ffdhe8192", 00:13:33.290 "digest": "sha512", 00:13:33.290 "state": "completed" 00:13:33.290 }, 00:13:33.290 "cntlid": 143, 00:13:33.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:33.290 "listen_address": { 00:13:33.290 "adrfam": "IPv4", 00:13:33.290 "traddr": "10.0.0.3", 00:13:33.290 "trsvcid": "4420", 00:13:33.290 "trtype": "TCP" 00:13:33.290 }, 00:13:33.290 "peer_address": { 00:13:33.290 "adrfam": "IPv4", 00:13:33.290 "traddr": "10.0.0.1", 00:13:33.290 "trsvcid": "58784", 00:13:33.290 "trtype": "TCP" 00:13:33.290 }, 00:13:33.290 "qid": 0, 00:13:33.290 
"state": "enabled", 00:13:33.290 "thread": "nvmf_tgt_poll_group_000" 00:13:33.290 } 00:13:33.290 ]' 00:13:33.290 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.548 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.548 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.548 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:33.548 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.548 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.548 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.548 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.807 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:33.807 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:34.375 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.375 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:34.375 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.375 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.647 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.647 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:34.647 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:34.647 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:34.647 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:34.647 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:34.647 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:34.935 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.518 00:13:35.518 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.518 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.518 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.776 { 00:13:35.776 "auth": { 00:13:35.776 "dhgroup": "ffdhe8192", 00:13:35.776 "digest": "sha512", 00:13:35.776 "state": "completed" 00:13:35.776 }, 00:13:35.776 
"cntlid": 145, 00:13:35.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:35.776 "listen_address": { 00:13:35.776 "adrfam": "IPv4", 00:13:35.776 "traddr": "10.0.0.3", 00:13:35.776 "trsvcid": "4420", 00:13:35.776 "trtype": "TCP" 00:13:35.776 }, 00:13:35.776 "peer_address": { 00:13:35.776 "adrfam": "IPv4", 00:13:35.776 "traddr": "10.0.0.1", 00:13:35.776 "trsvcid": "58806", 00:13:35.776 "trtype": "TCP" 00:13:35.776 }, 00:13:35.776 "qid": 0, 00:13:35.776 "state": "enabled", 00:13:35.776 "thread": "nvmf_tgt_poll_group_000" 00:13:35.776 } 00:13:35.776 ]' 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.776 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.341 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:36.341 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:00:M2E5YWZhMjUzMmI4YTMyNTU1NDA2YmU4NDk1NjliNmJlZjQwNjBlMWU0ZTI1NTJixklf/w==: --dhchap-ctrl-secret DHHC-1:03:ZDNjMDhjZjcxYWUyN2JmMTQ3MjdjYTg4NjVhODc3YjBlMGIzMjczNjM0OGNkNGFlNzMzMjg4MzBjODdhMDA0YQDD/Ws=: 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 00:13:36.909 12:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:36.909 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:37.476 2024/12/05 12:17:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:37.476 request: 00:13:37.476 { 00:13:37.476 "method": "bdev_nvme_attach_controller", 00:13:37.476 "params": { 00:13:37.476 "name": "nvme0", 00:13:37.476 "trtype": "tcp", 00:13:37.476 "traddr": "10.0.0.3", 00:13:37.476 "adrfam": "ipv4", 00:13:37.476 "trsvcid": "4420", 00:13:37.476 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:37.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:37.476 "prchk_reftag": false, 00:13:37.476 "prchk_guard": false, 00:13:37.476 "hdgst": false, 00:13:37.476 "ddgst": false, 00:13:37.476 "dhchap_key": "key2", 00:13:37.476 "allow_unrecognized_csi": false 00:13:37.476 } 00:13:37.476 } 00:13:37.476 Got JSON-RPC error response 00:13:37.476 GoRPCClient: error on JSON-RPC call 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.476 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:37.477 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:37.477 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:38.043 2024/12/05 12:17:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:38.043 request: 00:13:38.043 { 00:13:38.043 "method": "bdev_nvme_attach_controller", 00:13:38.043 "params": { 00:13:38.043 "name": "nvme0", 00:13:38.043 "trtype": "tcp", 00:13:38.043 "traddr": "10.0.0.3", 00:13:38.043 "adrfam": "ipv4", 00:13:38.043 "trsvcid": "4420", 00:13:38.043 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:38.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:38.043 "prchk_reftag": false, 00:13:38.043 "prchk_guard": false, 00:13:38.043 "hdgst": false, 00:13:38.043 "ddgst": false, 00:13:38.043 "dhchap_key": "key1", 00:13:38.043 "dhchap_ctrlr_key": "ckey2", 00:13:38.043 "allow_unrecognized_csi": false 00:13:38.043 } 00:13:38.043 } 00:13:38.043 Got JSON-RPC error response 00:13:38.043 GoRPCClient: error on JSON-RPC call 00:13:38.043 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:38.043 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.043 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.043 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.043 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:38.043 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.044 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.611 2024/12/05 12:17:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:38.611 request: 00:13:38.611 { 00:13:38.611 "method": "bdev_nvme_attach_controller", 00:13:38.611 "params": { 00:13:38.611 "name": "nvme0", 00:13:38.611 "trtype": "tcp", 00:13:38.611 "traddr": "10.0.0.3", 00:13:38.611 "adrfam": "ipv4", 00:13:38.611 "trsvcid": "4420", 00:13:38.611 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:38.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:38.611 "prchk_reftag": false, 00:13:38.611 "prchk_guard": false, 00:13:38.611 "hdgst": false, 00:13:38.611 "ddgst": false, 00:13:38.611 "dhchap_key": "key1", 00:13:38.611 "dhchap_ctrlr_key": "ckey1", 00:13:38.611 "allow_unrecognized_csi": false 00:13:38.611 } 00:13:38.611 } 00:13:38.611 Got JSON-RPC error response 00:13:38.611 GoRPCClient: error on JSON-RPC call 00:13:38.611 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:38.611 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76223 00:13:38.612 12:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76223 ']' 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76223 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76223 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.612 killing process with pid 76223 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76223' 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76223 00:13:38.612 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76223 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81040 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81040 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81040 ']' 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.870 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81040 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81040 ']' 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.806 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.065 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.065 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:40.065 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:40.065 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.065 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 null0 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GjL 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.QgV ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QgV 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.b8y 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.0qF ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0qF 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:40.324 12:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.NmW 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kQt ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kQt 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.e2C 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:13:40.324 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.259 nvme0n1 00:13:41.259 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.259 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.259 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.517 { 00:13:41.517 "auth": { 00:13:41.517 "dhgroup": "ffdhe8192", 00:13:41.517 "digest": "sha512", 00:13:41.517 "state": "completed" 00:13:41.517 }, 00:13:41.517 "cntlid": 1, 00:13:41.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:41.517 "listen_address": { 00:13:41.517 "adrfam": "IPv4", 00:13:41.517 "traddr": "10.0.0.3", 00:13:41.517 "trsvcid": "4420", 00:13:41.517 "trtype": "TCP" 00:13:41.517 }, 00:13:41.517 "peer_address": { 00:13:41.517 "adrfam": "IPv4", 00:13:41.517 "traddr": "10.0.0.1", 00:13:41.517 "trsvcid": "51712", 00:13:41.517 "trtype": "TCP" 00:13:41.517 }, 00:13:41.517 "qid": 0, 00:13:41.517 "state": "enabled", 00:13:41.517 "thread": "nvmf_tgt_poll_group_000" 00:13:41.517 } 00:13:41.517 ]' 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.517 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.082 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:42.082 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:42.648 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key3 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:42.649 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.907 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.164 2024/12/05 12:17:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:43.164 request: 00:13:43.164 { 00:13:43.164 "method": "bdev_nvme_attach_controller", 00:13:43.164 "params": { 00:13:43.164 "name": "nvme0", 00:13:43.164 "trtype": "tcp", 00:13:43.164 "traddr": "10.0.0.3", 00:13:43.164 "adrfam": "ipv4", 00:13:43.164 "trsvcid": "4420", 00:13:43.164 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:43.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:43.164 "prchk_reftag": false, 00:13:43.164 "prchk_guard": false, 00:13:43.164 "hdgst": false, 00:13:43.164 "ddgst": false, 00:13:43.164 "dhchap_key": "key3", 00:13:43.164 "allow_unrecognized_csi": false 00:13:43.164 } 00:13:43.164 } 00:13:43.164 Got JSON-RPC error response 00:13:43.164 GoRPCClient: error on JSON-RPC call 00:13:43.164 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:43.164 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.164 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.164 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.164 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:43.165 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:43.165 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:43.165 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.422 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.680 2024/12/05 12:17:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:43.680 request: 00:13:43.680 { 00:13:43.680 "method": "bdev_nvme_attach_controller", 00:13:43.680 "params": { 00:13:43.680 "name": "nvme0", 00:13:43.680 "trtype": "tcp", 00:13:43.680 "traddr": "10.0.0.3", 00:13:43.680 "adrfam": "ipv4", 00:13:43.680 "trsvcid": "4420", 00:13:43.680 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:43.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:43.680 "prchk_reftag": false, 00:13:43.680 "prchk_guard": false, 00:13:43.680 "hdgst": false, 00:13:43.680 "ddgst": false, 00:13:43.680 "dhchap_key": "key3", 00:13:43.680 "allow_unrecognized_csi": false 00:13:43.680 } 00:13:43.680 } 00:13:43.680 Got JSON-RPC error response 00:13:43.680 GoRPCClient: error on JSON-RPC call 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:43.680 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:43.938 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:44.503 2024/12/05 12:17:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:44.503 request: 00:13:44.503 { 00:13:44.503 "method": "bdev_nvme_attach_controller", 00:13:44.503 "params": { 00:13:44.503 "name": "nvme0", 00:13:44.503 "trtype": "tcp", 00:13:44.503 "traddr": "10.0.0.3", 00:13:44.503 "adrfam": "ipv4", 00:13:44.503 "trsvcid": "4420", 00:13:44.503 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:44.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:44.503 "prchk_reftag": false, 00:13:44.503 "prchk_guard": false, 00:13:44.503 "hdgst": false, 00:13:44.503 "ddgst": false, 00:13:44.503 "dhchap_key": "key0", 00:13:44.503 "dhchap_ctrlr_key": "key1", 00:13:44.503 "allow_unrecognized_csi": false 00:13:44.503 } 00:13:44.503 } 00:13:44.503 Got JSON-RPC error response 00:13:44.503 GoRPCClient: error on JSON-RPC call 00:13:44.503 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:44.503 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:44.503 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.503 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.503 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:44.503 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:44.503 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:44.761 nvme0n1 00:13:44.761 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:44.761 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.761 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:45.019 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.019 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.019 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.277 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 00:13:45.277 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.277 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:13:45.277 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.277 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:45.277 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:45.277 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:46.211 nvme0n1 00:13:46.211 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:46.211 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.211 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:46.470 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.470 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:46.470 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.470 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.470 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.470 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:46.470 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:46.470 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.037 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.037 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:47.037 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid bd2c7f89-972d-42a2-ab59-8d5d886644e3 -l 0 --dhchap-secret DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: --dhchap-ctrl-secret DHHC-1:03:OTAzYWNiOTU2OTFiMjVkOWI4NzMyMTFiMDJmZGMxNGE4ZWYzMDg1ZTlkMWYxY2M1NjI5YmY2MmEwMjcxZGMzMaMYzLo=: 00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
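The trace above repeats one host-side pattern: every call goes through rpc.py against the standalone host application's socket, the DH-HMAC-CHAP negotiation parameters are set with bdev_nvme_set_options, the controller is attached with an explicit --dhchap-key, and the result is checked with bdev_nvme_get_controllers. A minimal sketch of that flow, assuming the same socket, addresses, NQNs, and key names this run uses (only the $rpc/$hostnqn shorthand is added here, standing in for the test's hostrpc wrapper):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3

  # Permit every digest/dhgroup so negotiation itself is not the failing step
  $rpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

  # Attach with the key the target expects for this host NQN
  $rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

  # Verify the controller came up; the test asserts the literal name nvme0
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'

The negative cases above follow the same shape: once the host is restricted (for example to --dhchap-digests sha256) the key3 attach is expected to fail, and the trace records the resulting Code=-5 Msg=Input/output error from the JSON-RPC call.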
00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.603 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.860 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:47.861 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:48.426 2024/12/05 12:17:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:13:48.426 request: 00:13:48.426 { 00:13:48.426 "method": "bdev_nvme_attach_controller", 00:13:48.426 "params": { 00:13:48.426 "name": "nvme0", 00:13:48.426 "trtype": "tcp", 00:13:48.426 "traddr": "10.0.0.3", 00:13:48.426 "adrfam": "ipv4", 
00:13:48.426 "trsvcid": "4420", 00:13:48.426 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:48.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3", 00:13:48.426 "prchk_reftag": false, 00:13:48.426 "prchk_guard": false, 00:13:48.426 "hdgst": false, 00:13:48.426 "ddgst": false, 00:13:48.426 "dhchap_key": "key1", 00:13:48.426 "allow_unrecognized_csi": false 00:13:48.426 } 00:13:48.426 } 00:13:48.426 Got JSON-RPC error response 00:13:48.426 GoRPCClient: error on JSON-RPC call 00:13:48.426 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:48.426 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.427 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.427 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.427 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:48.427 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:48.427 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:49.363 nvme0n1 00:13:49.363 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:49.363 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:49.363 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.623 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.623 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.623 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.882 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:49.882 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.882 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.882 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:49.882 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:49.882 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:50.141 nvme0n1 00:13:50.141 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:50.141 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.141 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:50.400 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.400 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.400 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: '' 2s 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: ]] 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGYzYmFhZWY1YTRkZDQxM2Q5ZjNkOThmZjM3MTRhNzZn7qPo: 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:50.659 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.193 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: 2s 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: ]] 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWUyMmU5N2ZlMjg4ODg0OGZhZWY0NjNlMGU2Zjg3MjZiMmRjYTI5NzU0NWU2NjJlAK69ig==: 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:53.194 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.117 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:55.118 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:55.118 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:55.703 nvme0n1 00:13:55.703 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:55.703 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.703 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.963 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.963 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:55.963 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:56.530 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:56.530 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:56.530 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:56.789 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.789 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:13:56.789 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.789 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.789 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.789 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:56.789 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:57.047 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:57.047 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.047 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:57.306 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:57.307 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:13:57.874 2024/12/05 12:17:28 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:13:57.874 request: 00:13:57.874 { 00:13:57.874 "method": "bdev_nvme_set_keys", 00:13:57.874 "params": { 00:13:57.874 "name": "nvme0", 00:13:57.874 "dhchap_key": "key1", 00:13:57.874 "dhchap_ctrlr_key": "key3" 00:13:57.874 } 00:13:57.874 } 00:13:57.874 Got JSON-RPC error response 00:13:57.874 GoRPCClient: error on JSON-RPC call 00:13:57.874 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:57.874 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:57.874 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:57.874 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:57.874 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:57.874 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.874 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:58.132 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:58.132 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:59.509 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:59.509 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:59.509 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.509 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:59.509 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:59.509 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.509 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.509 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.509 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:59.509 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:59.509 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:00.444 nvme0n1 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:00.444 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:01.011 2024/12/05 12:17:31 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:14:01.011 request: 00:14:01.011 { 00:14:01.011 "method": "bdev_nvme_set_keys", 00:14:01.011 "params": { 00:14:01.011 "name": "nvme0", 00:14:01.011 "dhchap_key": "key2", 00:14:01.011 "dhchap_ctrlr_key": "key0" 00:14:01.011 } 00:14:01.011 } 00:14:01.011 Got JSON-RPC error response 00:14:01.011 GoRPCClient: error on JSON-RPC call 00:14:01.011 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:01.011 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:01.011 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:01.011 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:01.011 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:01.011 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
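The rotation sequence above pairs a target-side nvmf_subsystem_set_keys with a host-side bdev_nvme_set_keys on the live controller, and relies on the short --ctrlr-loss-timeout-sec/--reconnect-delay-sec settings so a controller left holding stale keys drops quickly. A condensed sketch of that loop, assuming the same sockets and names as this run (the explicit while loop is shorthand for the repeated "jq length" / "sleep 1s" steps in the trace):

  target_rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py            # target app, default socket
  host_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3

  # 1. Target side: move the subsystem/host pairing to the new key set
  $target_rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # 2. Host side: rotate the live controller to the matching keys; a pair the
  #    target does not hold (key1/key3 and key2/key0 above) is rejected with
  #    Code=-13 Msg=Permission denied
  $host_rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

  # 3. After a deliberately mismatched rotation, wait for the short loss
  #    timeout to reap the controller before asserting on the count
  while (( $($host_rpc bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1
  done

The "(( 1 != 0 )) ... sleep 1s ... (( 0 != 0 ))" steps in the trace are exactly this poll: the count stays at 1 for one iteration, then the controller is reaped and the test proceeds.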
00:14:01.011 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.270 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:01.529 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:02.464 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:02.465 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:02.465 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.722 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:02.722 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:02.722 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76248 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76248 ']' 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76248 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76248 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:02.723 killing process with pid 76248 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76248' 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76248 00:14:02.723 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76248 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:03.288 rmmod nvme_tcp 00:14:03.288 rmmod nvme_fabrics 00:14:03.288 rmmod nvme_keyring 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81040 ']' 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81040 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81040 ']' 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81040 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.288 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81040 00:14:03.288 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:03.288 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:03.288 killing process with pid 81040 00:14:03.288 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81040' 00:14:03.288 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81040 00:14:03.288 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81040 00:14:03.545 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:03.545 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:03.545 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:03.545 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:03.545 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:14:03.545 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:03.545 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:03.546 12:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.546 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.GjL /tmp/spdk.key-sha256.b8y /tmp/spdk.key-sha384.NmW /tmp/spdk.key-sha512.e2C /tmp/spdk.key-sha512.QgV /tmp/spdk.key-sha384.0qF /tmp/spdk.key-sha256.kQt '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:03.804 00:14:03.804 real 3m2.587s 00:14:03.804 user 7m24.213s 00:14:03.804 sys 0m22.920s 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.804 ************************************ 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.804 END TEST nvmf_auth_target 00:14:03.804 ************************************ 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:03.804 ************************************ 00:14:03.804 START TEST nvmf_bdevio_no_huge 00:14:03.804 ************************************ 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:03.804 * Looking for test storage... 
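The cleanup that closes nvmf_auth_target above tears the network fixture down in a fixed order: the SPDK-tagged iptables rules go first, each veth is detached from the bridge and brought down before the bridge itself is deleted, and the interfaces living inside the target's namespace go last. A compact sketch of that teardown, using the interface and namespace names this harness creates (the per-device loop condenses the separate nomaster and down passes in the trace into one pass with the same end state):

  # Keep every iptables rule except the SPDK_NVMF-tagged ones
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Detach each veth from the bridge and bring it down, then drop the bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge

  # Host-side ends first, then the ends inside the target's network namespace
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

Only after this, and after the generated /tmp/spdk.key-* files are removed, does the harness report the test's wall-clock totals and move on to the next suite.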
00:14:03.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.804 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:04.064 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:04.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.065 --rc genhtml_branch_coverage=1 00:14:04.065 --rc genhtml_function_coverage=1 00:14:04.065 --rc genhtml_legend=1 00:14:04.065 --rc geninfo_all_blocks=1 00:14:04.065 --rc geninfo_unexecuted_blocks=1 00:14:04.065 00:14:04.065 ' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:04.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.065 --rc genhtml_branch_coverage=1 00:14:04.065 --rc genhtml_function_coverage=1 00:14:04.065 --rc genhtml_legend=1 00:14:04.065 --rc geninfo_all_blocks=1 00:14:04.065 --rc geninfo_unexecuted_blocks=1 00:14:04.065 00:14:04.065 ' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:04.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.065 --rc genhtml_branch_coverage=1 00:14:04.065 --rc genhtml_function_coverage=1 00:14:04.065 --rc genhtml_legend=1 00:14:04.065 --rc geninfo_all_blocks=1 00:14:04.065 --rc geninfo_unexecuted_blocks=1 00:14:04.065 00:14:04.065 ' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:04.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.065 --rc genhtml_branch_coverage=1 00:14:04.065 --rc genhtml_function_coverage=1 00:14:04.065 --rc genhtml_legend=1 00:14:04.065 --rc geninfo_all_blocks=1 00:14:04.065 --rc geninfo_unexecuted_blocks=1 00:14:04.065 00:14:04.065 ' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.065 
12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:04.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:04.065 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:04.066 
12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:04.066 Cannot find device "nvmf_init_br" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:04.066 Cannot find device "nvmf_init_br2" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:04.066 Cannot find device "nvmf_tgt_br" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.066 Cannot find device "nvmf_tgt_br2" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:04.066 Cannot find device "nvmf_init_br" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:04.066 Cannot find device "nvmf_init_br2" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:04.066 Cannot find device "nvmf_tgt_br" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:04.066 Cannot find device "nvmf_tgt_br2" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:04.066 Cannot find device "nvmf_br" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:04.066 Cannot find device "nvmf_init_if" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:04.066 Cannot find device "nvmf_init_if2" 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:04.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:04.066 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:04.326 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:04.326 12:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:04.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:04.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:14:04.326 00:14:04.326 --- 10.0.0.3 ping statistics --- 00:14:04.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.326 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:04.326 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:04.326 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:14:04.326 00:14:04.326 --- 10.0.0.4 ping statistics --- 00:14:04.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.326 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:04.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:14:04.326 00:14:04.326 --- 10.0.0.1 ping statistics --- 00:14:04.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.326 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:04.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:14:04.326 00:14:04.326 --- 10.0.0.2 ping statistics --- 00:14:04.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.326 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:04.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=81908 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 81908 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 81908 ']' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.326 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:04.326 [2024-12-05 12:17:35.190140] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
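The DPDK EAL parameter line that follows shows where the options assembled earlier end up: -s 1024 combined with --no-huge becomes "-m 1024 --no-huge", and -m 0x78 becomes "-c 0x78" (cores 3-6, matching the reactor lines further down). A hedged reconstruction of how the harness composes that command line, using the array names visible in the xtrace above and the values from this run:

NVMF_APP_SHM_ID=0
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)             # shm id and full tracepoint mask
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # prefix: run inside the netns
# In the harness, --no-huge -s 1024 arrives via the NO_HUGE array and -m 0x78 via
# the nvmfappstart arguments; they are appended literally here for brevity.
"${NVMF_APP[@]}" --no-huge -s 1024 -m 0x78 &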
00:14:04.326 [2024-12-05 12:17:35.190248] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:04.584 [2024-12-05 12:17:35.357650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.584 [2024-12-05 12:17:35.439214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.584 [2024-12-05 12:17:35.439272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.584 [2024-12-05 12:17:35.439299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.584 [2024-12-05 12:17:35.439311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.584 [2024-12-05 12:17:35.439321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.584 [2024-12-05 12:17:35.440245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:04.584 [2024-12-05 12:17:35.440377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:04.584 [2024-12-05 12:17:35.440831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:04.584 [2024-12-05 12:17:35.440839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:05.536 [2024-12-05 12:17:36.301903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:05.536 Malloc0 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:05.536 [2024-12-05 12:17:36.346523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:05.536 { 00:14:05.536 "params": { 00:14:05.536 "name": "Nvme$subsystem", 00:14:05.536 "trtype": "$TEST_TRANSPORT", 00:14:05.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:05.536 "adrfam": "ipv4", 00:14:05.536 "trsvcid": "$NVMF_PORT", 00:14:05.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:05.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:05.536 "hdgst": ${hdgst:-false}, 00:14:05.536 "ddgst": ${ddgst:-false} 00:14:05.536 }, 00:14:05.536 "method": "bdev_nvme_attach_controller" 00:14:05.536 } 00:14:05.536 EOF 00:14:05.536 )") 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
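gen_nvmf_target_json, traced just above, builds one bdev_nvme_attach_controller stanza per subsystem by expanding an unquoted heredoc into a bash array, then joins the stanzas with IFS=, and prints the result (the fully expanded JSON follows immediately below; the jq . step validates it). A minimal sketch of the same templating pattern, trimmed to the fields that vary:

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "hdgst": ${hdgst:-false} },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # comma-joins the array; with one subsystem, one stanza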
00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:05.536 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:05.536 "params": { 00:14:05.536 "name": "Nvme1", 00:14:05.536 "trtype": "tcp", 00:14:05.536 "traddr": "10.0.0.3", 00:14:05.536 "adrfam": "ipv4", 00:14:05.536 "trsvcid": "4420", 00:14:05.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:05.536 "hdgst": false, 00:14:05.536 "ddgst": false 00:14:05.536 }, 00:14:05.536 "method": "bdev_nvme_attach_controller" 00:14:05.536 }' 00:14:05.794 [2024-12-05 12:17:36.416121] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:14:05.794 [2024-12-05 12:17:36.416726] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid81965 ] 00:14:05.794 [2024-12-05 12:17:36.584109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.051 [2024-12-05 12:17:36.672905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.051 [2024-12-05 12:17:36.673036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.051 [2024-12-05 12:17:36.673057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.309 I/O targets: 00:14:06.309 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:06.309 00:14:06.309 00:14:06.309 CUnit - A unit testing framework for C - Version 2.1-3 00:14:06.309 http://cunit.sourceforge.net/ 00:14:06.309 00:14:06.309 00:14:06.309 Suite: bdevio tests on: Nvme1n1 00:14:06.309 Test: blockdev write read block ...passed 00:14:06.309 Test: blockdev write zeroes read block ...passed 00:14:06.309 Test: blockdev write zeroes read no split ...passed 00:14:06.309 Test: blockdev write zeroes read split ...passed 00:14:06.309 Test: blockdev write zeroes read split partial ...passed 00:14:06.309 Test: blockdev reset ...[2024-12-05 12:17:37.042494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:06.309 [2024-12-05 12:17:37.042619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd5eb0 (9): Bad file descriptor 00:14:06.309 [2024-12-05 12:17:37.054093] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:06.309 passed 00:14:06.309 Test: blockdev write read 8 blocks ...passed 00:14:06.309 Test: blockdev write read size > 128k ...passed 00:14:06.309 Test: blockdev write read invalid size ...passed 00:14:06.309 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:06.309 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:06.309 Test: blockdev write read max offset ...passed 00:14:06.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:06.567 Test: blockdev writev readv 8 blocks ...passed 00:14:06.567 Test: blockdev writev readv 30 x 1block ...passed 00:14:06.567 Test: blockdev writev readv block ...passed 00:14:06.567 Test: blockdev writev readv size > 128k ...passed 00:14:06.567 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:06.567 Test: blockdev comparev and writev ...[2024-12-05 12:17:37.227373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.567 [2024-12-05 12:17:37.227544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:06.567 [2024-12-05 12:17:37.227634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.567 [2024-12-05 12:17:37.227754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:06.567 [2024-12-05 12:17:37.228228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.567 [2024-12-05 12:17:37.228399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:06.567 [2024-12-05 12:17:37.228508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.567 [2024-12-05 12:17:37.228584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:06.567 [2024-12-05 12:17:37.229059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.567 [2024-12-05 12:17:37.229204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:06.567 [2024-12-05 12:17:37.229278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.567 [2024-12-05 12:17:37.229377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:06.567 [2024-12-05 12:17:37.229868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.567 [2024-12-05 12:17:37.230005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:06.567 [2024-12-05 12:17:37.230076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:06.567 [2024-12-05 12:17:37.230141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:06.567 passed 00:14:06.567 Test: blockdev nvme passthru rw ...passed 00:14:06.568 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:17:37.312673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.568 [2024-12-05 12:17:37.312825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:06.568 [2024-12-05 12:17:37.313023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.568 [2024-12-05 12:17:37.313122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:06.568 [2024-12-05 12:17:37.313319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.568 [2024-12-05 12:17:37.313402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:06.568 [2024-12-05 12:17:37.313594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:06.568 [2024-12-05 12:17:37.313683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:06.568 passed 00:14:06.568 Test: blockdev nvme admin passthru ...passed 00:14:06.568 Test: blockdev copy ...passed 00:14:06.568 00:14:06.568 Run Summary: Type Total Ran Passed Failed Inactive 00:14:06.568 suites 1 1 n/a 0 0 00:14:06.568 tests 23 23 23 0 0 00:14:06.568 asserts 152 152 152 0 n/a 00:14:06.568 00:14:06.568 Elapsed time = 0.919 seconds 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.133 rmmod nvme_tcp 00:14:07.133 rmmod nvme_fabrics 00:14:07.133 rmmod nvme_keyring 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 81908 ']' 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 81908 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 81908 ']' 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 81908 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.133 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81908 00:14:07.134 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:07.134 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:07.134 killing process with pid 81908 00:14:07.134 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81908' 00:14:07.134 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 81908 00:14:07.134 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 81908 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:07.701 12:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.701 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:07.702 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.702 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.702 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.702 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:07.702 00:14:07.702 real 0m4.055s 00:14:07.702 user 0m13.965s 00:14:07.702 sys 0m1.489s 00:14:07.702 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.702 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.702 ************************************ 00:14:07.702 END TEST nvmf_bdevio_no_huge 00:14:07.702 ************************************ 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.962 ************************************ 00:14:07.962 START TEST nvmf_tls 00:14:07.962 ************************************ 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:07.962 * Looking for test storage... 
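Before the nvmf_tls trace continues: the stretch just above is nvmftestfini for the bdevio test, i.e. module unload (the rmmod lines), target shutdown via killprocess, then network cleanup via iptr and nvmf_veth_fini. The iptables step can strip exactly the rules this run added because setup tagged each one with an SPDK_NVMF comment. A condensed sketch of that cleanup; the final ip netns del is an assumption standing in for _remove_spdk_ns, whose body is not traced here:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
ip link delete nvmf_br type bridge || true   # "|| true": already-missing devices are
ip link delete nvmf_init_if || true          # tolerated, which is why the "Cannot find
ip link delete nvmf_init_if2 || true         # device" lines during the next test's
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true   # nvmf_veth_init are benign
ip netns del nvmf_tgt_ns_spdk || true        # assumed equivalent of _remove_spdk_ns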
00:14:07.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:07.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.962 --rc genhtml_branch_coverage=1 00:14:07.962 --rc genhtml_function_coverage=1 00:14:07.962 --rc genhtml_legend=1 00:14:07.962 --rc geninfo_all_blocks=1 00:14:07.962 --rc geninfo_unexecuted_blocks=1 00:14:07.962 00:14:07.962 ' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:07.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.962 --rc genhtml_branch_coverage=1 00:14:07.962 --rc genhtml_function_coverage=1 00:14:07.962 --rc genhtml_legend=1 00:14:07.962 --rc geninfo_all_blocks=1 00:14:07.962 --rc geninfo_unexecuted_blocks=1 00:14:07.962 00:14:07.962 ' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:07.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.962 --rc genhtml_branch_coverage=1 00:14:07.962 --rc genhtml_function_coverage=1 00:14:07.962 --rc genhtml_legend=1 00:14:07.962 --rc geninfo_all_blocks=1 00:14:07.962 --rc geninfo_unexecuted_blocks=1 00:14:07.962 00:14:07.962 ' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:07.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.962 --rc genhtml_branch_coverage=1 00:14:07.962 --rc genhtml_function_coverage=1 00:14:07.962 --rc genhtml_legend=1 00:14:07.962 --rc geninfo_all_blocks=1 00:14:07.962 --rc geninfo_unexecuted_blocks=1 00:14:07.962 00:14:07.962 ' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.962 12:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.962 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.963 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:07.963 
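Note the `[: : integer expression expected` complaint above: build_nvmf_app_args runs `'[' '' -eq 1 ']'` against a variable that is empty in this environment, and `-eq` with an empty operand is a numeric-test error. It is harmless here — the failed test simply falls through — but the noise is avoidable by defaulting the expansion. A sketch (the flag name is illustrative, not the actual variable tested at nvmf/common.sh line 33):

# '-eq' on an empty value prints "integer expression expected";
# defaulting the expansion keeps the operand numeric.
if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi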
12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:07.963 Cannot find device "nvmf_init_br" 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:07.963 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:08.222 Cannot find device "nvmf_init_br2" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:08.222 Cannot find device "nvmf_tgt_br" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.222 Cannot find device "nvmf_tgt_br2" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:08.222 Cannot find device "nvmf_init_br" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:08.222 Cannot find device "nvmf_init_br2" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:08.222 Cannot find device "nvmf_tgt_br" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:08.222 Cannot find device "nvmf_tgt_br2" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:08.222 Cannot find device "nvmf_br" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:08.222 Cannot find device "nvmf_init_if" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:08.222 Cannot find device "nvmf_init_if2" 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.222 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:08.222 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.480 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:08.481 12:17:39 
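The ip commands traced through this stretch are nvmf_veth_init building a self-contained test network: the initiator ends of the veth pairs stay in the root namespace, the target ends move into nvmf_tgt_ns_spdk, and every bridge-facing peer is enslaved to nvmf_br so the 10.0.0.0/24 addresses can reach each other. A condensed sketch of one initiator/target leg, using the same commands and names as the trace:

# One leg of the topology nvmf_veth_init assembles (sketch).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                     # bridge the two legs together
ip link set nvmf_tgt_br master nvmf_br

The ipts wrapper expanded just below tags each rule with `-m comment --comment "SPDK_NVMF:<rule>"`, apparently so teardown can later find and remove exactly the iptables rules this run added.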
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:08.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:14:08.481 00:14:08.481 --- 10.0.0.3 ping statistics --- 00:14:08.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.481 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:08.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:08.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:14:08.481 00:14:08.481 --- 10.0.0.4 ping statistics --- 00:14:08.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.481 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:08.481 00:14:08.481 --- 10.0.0.1 ping statistics --- 00:14:08.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.481 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:08.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:08.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:14:08.481 00:14:08.481 --- 10.0.0.2 ping statistics --- 00:14:08.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.481 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82202 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82202 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82202 ']' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.481 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.481 [2024-12-05 12:17:39.294849] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
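With all four ping directions verified, nvmfappstart launches nvmf_tgt inside the target namespace with --wait-for-rpc (initialization pauses until an explicit framework_start_init later on) and waitforlisten blocks until the app answers on its RPC socket. Roughly what that readiness loop does (a sketch; the real helper also re-checks between attempts that the PID is still alive):

# Poll the SPDK RPC socket until the app responds (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done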
00:14:08.481 [2024-12-05 12:17:39.294952] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.739 [2024-12-05 12:17:39.451138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.739 [2024-12-05 12:17:39.502183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.739 [2024-12-05 12:17:39.502263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.739 [2024-12-05 12:17:39.502299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.739 [2024-12-05 12:17:39.502311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.739 [2024-12-05 12:17:39.502319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.739 [2024-12-05 12:17:39.502760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.739 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.739 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:08.739 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:08.739 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:08.739 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.739 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.739 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:08.739 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:08.997 true 00:14:08.997 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:08.997 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:09.255 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:09.255 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:09.255 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:09.515 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:09.515 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:09.773 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:09.773 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:09.773 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:10.031 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:10.031 12:17:40 
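This block is the first TLS-specific configuration: sock_set_default_impl switches the default socket implementation to ssl, and the test then round-trips tls_version through sock_impl_set_options / sock_impl_get_options — first confirming the default of 0 (apparently the "no pinned version" value), then writing 13 and 7 and reading each back. The same check with the trace markers stripped, using exactly the RPCs that appear above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" sock_set_default_impl -i ssl
"$rpc" sock_impl_set_options -i ssl --tls-version 13
version=$("$rpc" sock_impl_get_options -i ssl | jq -r .tls_version)
[[ $version == 13 ]] || echo "unexpected tls_version: $version"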
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:10.289 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:10.289 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:10.289 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:10.289 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:10.548 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:10.548 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:10.548 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:10.806 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:10.806 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:11.064 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:11.064 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:11.064 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:11.323 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:11.323 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:11.581 12:17:42 
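format_interchange_psk (this entry and the one continuing below) wraps a raw hex key in the NVMe/TCP PSK interchange format, `NVMeTLSkey-1:01:<base64>:`, where the base64 payload is the configured key text followed by its CRC32, and the trailing `1` argument selects the `01` hash-indicator field. A standalone sketch mirroring the helper's own inline python; the little-endian CRC byte order is my reading of the output above, so treat that detail as an assumption:

# Reproduce the NVMeTLSkey-1:01:... interchange key from a raw key (sketch).
key=00112233445566778899aabbccddeeff
python3 - "$key" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: CRC32 appended little-endian
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF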
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:11.581 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Uqy8kpGKhG 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.RXQYYKUNom 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Uqy8kpGKhG 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.RXQYYKUNom 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:11.860 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:12.433 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Uqy8kpGKhG 00:14:12.433 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Uqy8kpGKhG 00:14:12.433 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:12.433 [2024-12-05 12:17:43.205413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.433 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:12.691 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:12.951 [2024-12-05 12:17:43.645450] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:12.951 [2024-12-05 12:17:43.645713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:12.951 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:13.210 malloc0 00:14:13.210 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:13.470 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Uqy8kpGKhG 00:14:13.470 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:13.729 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Uqy8kpGKhG
00:14:25.933 Initializing NVMe Controllers
00:14:25.933 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:14:25.933 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:25.933 Initialization complete. Launching workers.
00:14:25.933 ========================================================
00:14:25.933 Latency(us)
00:14:25.933 Device Information : IOPS MiB/s Average min max
00:14:25.933 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11816.49 46.16 5417.27 800.98 7676.01
00:14:25.933 ========================================================
00:14:25.933 Total : 11816.49 46.16 5417.27 800.98 7676.01
00:14:25.933
00:14:25.933 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uqy8kpGKhG 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Uqy8kpGKhG 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:25.933 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82553 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82553 /var/tmp/bdevperf.sock 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82553 ']' 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.933 [2024-12-05 12:17:54.788595] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
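Everything from target/tls.sh@52 through @59 above is the target-side TLS provisioning that the perf run then exercises: a TCP transport, a subsystem, a listener created with -k (TLS), a malloc namespace, the interchange key registered through the keyring, and host1 mapped to that key. Condensed to the bare RPC sequence traced above (paths and names exactly as in this run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc" keyring_file_add_key key0 /tmp/tmp.Uqy8kpGKhG
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

spdk_nvme_perf then connects from inside the namespace with -S ssl and --psk-path pointing at the same key file, which is what produces the ~11.8k IOPS table above.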
00:14:25.933 [2024-12-05 12:17:54.788688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82553 ] 00:14:25.933 [2024-12-05 12:17:54.936533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.933 [2024-12-05 12:17:54.989056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.933 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Uqy8kpGKhG 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:25.933 [2024-12-05 12:17:55.589204] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:25.933 TLSTESTn1 00:14:25.933 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:25.933 Running I/O for 10 seconds...
4931.00 IOPS, 19.26 MiB/s
[2024-12-05T12:17:59.107Z] 4999.00 IOPS, 19.53 MiB/s
[2024-12-05T12:18:00.043Z] 5017.00 IOPS, 19.60 MiB/s
[2024-12-05T12:18:00.975Z] 5024.75 IOPS, 19.63 MiB/s
[2024-12-05T12:18:01.910Z] 5023.40 IOPS, 19.62 MiB/s
[2024-12-05T12:18:02.847Z] 5032.50 IOPS, 19.66 MiB/s
[2024-12-05T12:18:03.783Z] 5038.43 IOPS, 19.68 MiB/s
[2024-12-05T12:18:05.160Z] 5038.12 IOPS, 19.68 MiB/s
[2024-12-05T12:18:06.093Z] 5029.78 IOPS, 19.65 MiB/s
[2024-12-05T12:18:06.093Z] 5023.20 IOPS, 19.62 MiB/s
00:14:35.224
00:14:35.224 Latency(us)
00:14:35.224 [2024-12-05T12:18:06.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:35.224 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:14:35.224 Verification LBA range: start 0x0 length 0x2000
00:14:35.224 TLSTESTn1 : 10.02 5027.56 19.64 0.00 0.00 25413.15 5808.87 22997.18
00:14:35.224 [2024-12-05T12:18:06.093Z] ===================================================================================================================
00:14:35.224 [2024-12-05T12:18:06.093Z] Total : 5027.56 19.64 0.00 0.00 25413.15 5808.87 22997.18
00:14:35.224 {
00:14:35.224 "results": [
00:14:35.224 {
00:14:35.224 "job": "TLSTESTn1",
00:14:35.224 "core_mask": "0x4",
00:14:35.224 "workload": "verify",
00:14:35.224 "status": "finished",
00:14:35.224 "verify_range": {
00:14:35.224 "start": 0,
00:14:35.224 "length": 8192
00:14:35.224 },
00:14:35.224 "queue_depth": 128,
00:14:35.224 "io_size": 4096,
00:14:35.224 "runtime": 10.016588,
00:14:35.224 "iops": 5027.5602830025555,
00:14:35.224 "mibps": 19.638907355478732,
00:14:35.224 "io_failed": 0,
00:14:35.224 "io_timeout": 0,
00:14:35.224 "avg_latency_us": 25413.149230127685,
00:14:35.224 "min_latency_us": 5808.872727272727,
00:14:35.224 "max_latency_us": 22997.17818181818
00:14:35.224 }
00:14:35.224 ],
00:14:35.224 "core_count": 1
00:14:35.224 }
00:14:35.224 12:18:05
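bdevperf reports the run twice: the human-readable latency table and the JSON blob above. The JSON is the stable surface for scripting; pulling the headline numbers out of a captured copy might look like this (the jq usage and the bdevperf.json filename are my additions, not part of the test):

# Extract throughput and average latency from a saved bdevperf summary (sketch).
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' bdevperf.json

As a consistency check, avg_latency_us here (~25.4 ms) matches Little's law at queue depth 128: 128 / 5027.56 IOPS is roughly 25.5 ms per I/O.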
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82553 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82553 ']' 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82553 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82553 00:14:35.224 killing process with pid 82553 00:14:35.224 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.224 00:14:35.224 Latency(us) 00:14:35.224 [2024-12-05T12:18:06.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.224 [2024-12-05T12:18:06.093Z] =================================================================================================================== 00:14:35.224 [2024-12-05T12:18:06.093Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82553' 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82553 00:14:35.224 12:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82553 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RXQYYKUNom 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RXQYYKUNom 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RXQYYKUNom 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RXQYYKUNom 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82694 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82694 /var/tmp/bdevperf.sock 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82694 ']' 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.224 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.224 [2024-12-05 12:18:06.079261] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:14:35.225 [2024-12-05 12:18:06.079397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82694 ] 00:14:35.482 [2024-12-05 12:18:06.224321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.482 [2024-12-05 12:18:06.270980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.739 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.739 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:35.739 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RXQYYKUNom 00:14:35.996 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:36.254 [2024-12-05 12:18:06.935417] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:36.254 [2024-12-05 12:18:06.945238] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:36.254 [2024-12-05 12:18:06.946016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cdb00 (107): Transport endpoint is not connected 00:14:36.254 [2024-12-05 12:18:06.947009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cdb00 (9): Bad file descriptor 00:14:36.254 [2024-12-05 
12:18:06.948004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:36.254 [2024-12-05 12:18:06.948040] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:36.254 [2024-12-05 12:18:06.948066] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:36.254 [2024-12-05 12:18:06.948080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:14:36.254 2024/12/05 12:18:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:36.254 request: 00:14:36.254 { 00:14:36.254 "method": "bdev_nvme_attach_controller", 00:14:36.254 "params": { 00:14:36.254 "name": "TLSTEST", 00:14:36.254 "trtype": "tcp", 00:14:36.254 "traddr": "10.0.0.3", 00:14:36.254 "adrfam": "ipv4", 00:14:36.254 "trsvcid": "4420", 00:14:36.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.254 "prchk_reftag": false, 00:14:36.254 "prchk_guard": false, 00:14:36.254 "hdgst": false, 00:14:36.254 "ddgst": false, 00:14:36.254 "psk": "key0", 00:14:36.254 "allow_unrecognized_csi": false 00:14:36.254 } 00:14:36.254 } 00:14:36.254 Got JSON-RPC error response 00:14:36.254 GoRPCClient: error on JSON-RPC call 00:14:36.254 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82694 00:14:36.254 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82694 ']' 00:14:36.254 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82694 00:14:36.254 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:36.254 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.254 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82694 00:14:36.254 killing process with pid 82694 00:14:36.254 Received shutdown signal, test time was about 10.000000 seconds 00:14:36.254 00:14:36.254 Latency(us) 00:14:36.254 [2024-12-05T12:18:07.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.254 [2024-12-05T12:18:07.123Z] =================================================================================================================== 00:14:36.254 [2024-12-05T12:18:07.123Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:36.254 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:36.254 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:36.254 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82694' 00:14:36.254 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82694 00:14:36.254 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 82694 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Uqy8kpGKhG 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Uqy8kpGKhG 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Uqy8kpGKhG 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Uqy8kpGKhG 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82733 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82733 /var/tmp/bdevperf.sock 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82733 ']' 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
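The es bookkeeping above (`es=1`, `(( es > 128 ))`, `(( !es == 0 ))`) is the tail of the NOT wrapper around run_bdevperf: the command is required to fail, a plain non-zero exit is converted into success, and only exit codes above 128 (signal deaths) would still count as real failures. Roughly, as a sketch reconstructed from the traced checks rather than the verbatim helper:

# NOT: succeed iff the wrapped command fails "normally" (sketch).
NOT() {
    local es=0
    "$@" || es=$?
    ((es > 128)) && return "$es"   # killed by a signal: propagate as a real failure
    ((es != 0))                    # expected failure -> success; clean exit -> failure
}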
00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.512 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.512 [2024-12-05 12:18:07.250449] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:14:36.512 [2024-12-05 12:18:07.250564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82733 ] 00:14:36.770 [2024-12-05 12:18:07.396606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.770 [2024-12-05 12:18:07.440939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.770 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.770 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:36.770 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Uqy8kpGKhG 00:14:37.028 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:37.287 [2024-12-05 12:18:07.997239] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.287 [2024-12-05 12:18:08.005198] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:37.287 [2024-12-05 12:18:08.005254] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:37.287 [2024-12-05 12:18:08.005324] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:37.287 [2024-12-05 12:18:08.006126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2134b00 (107): Transport endpoint is not connected 00:14:37.287 [2024-12-05 12:18:08.007119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2134b00 (9): Bad file descriptor 00:14:37.287 [2024-12-05 12:18:08.008116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:37.287 [2024-12-05 12:18:08.008137] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:37.287 [2024-12-05 12:18:08.008163] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:37.287 [2024-12-05 12:18:08.008181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:37.287 2024/12/05 12:18:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:37.287 request: 00:14:37.287 { 00:14:37.287 "method": "bdev_nvme_attach_controller", 00:14:37.287 "params": { 00:14:37.287 "name": "TLSTEST", 00:14:37.287 "trtype": "tcp", 00:14:37.287 "traddr": "10.0.0.3", 00:14:37.287 "adrfam": "ipv4", 00:14:37.287 "trsvcid": "4420", 00:14:37.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.287 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:37.287 "prchk_reftag": false, 00:14:37.287 "prchk_guard": false, 00:14:37.287 "hdgst": false, 00:14:37.287 "ddgst": false, 00:14:37.287 "psk": "key0", 00:14:37.287 "allow_unrecognized_csi": false 00:14:37.287 } 00:14:37.287 } 00:14:37.287 Got JSON-RPC error response 00:14:37.287 GoRPCClient: error on JSON-RPC call 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82733 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82733 ']' 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82733 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82733 00:14:37.287 killing process with pid 82733 00:14:37.287 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.287 00:14:37.287 Latency(us) 00:14:37.287 [2024-12-05T12:18:08.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.287 [2024-12-05T12:18:08.156Z] =================================================================================================================== 00:14:37.287 [2024-12-05T12:18:08.156Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82733' 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82733 00:14:37.287 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82733 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:37.546 12:18:08 
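This failure differs from the wrong-key case above: the key material is valid, but host2 was never added to cnode1, so the target's PSK lookup by TLS identity finds nothing and the connection is refused. The identity string in the error is the NVMe/TCP TLS PSK identity, assembled from a fixed prefix plus the host and subsystem NQNs (reading `R01` as "retained PSK, hash 01" is my interpretation of the format):

# The PSK identity the target failed to resolve, as printed in the error above (sketch).
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$identity"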
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uqy8kpGKhG 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uqy8kpGKhG 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:37.546 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uqy8kpGKhG 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Uqy8kpGKhG 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82772 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82772 /var/tmp/bdevperf.sock 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82772 ']' 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.547 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.547 [2024-12-05 12:18:08.282621] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:14:37.547 [2024-12-05 12:18:08.282708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82772 ] 00:14:37.806 [2024-12-05 12:18:08.421756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.806 [2024-12-05 12:18:08.471795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.806 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.806 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:37.806 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Uqy8kpGKhG 00:14:38.064 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:38.323 [2024-12-05 12:18:09.110578] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.323 [2024-12-05 12:18:09.115605] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:38.323 [2024-12-05 12:18:09.115698] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:38.323 [2024-12-05 12:18:09.115756] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:38.323 [2024-12-05 12:18:09.116343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f29b00 (107): Transport endpoint is not connected 00:14:38.323 [2024-12-05 12:18:09.117331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f29b00 (9): Bad file descriptor 00:14:38.323 [2024-12-05 12:18:09.118328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:38.323 [2024-12-05 12:18:09.118350] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:38.323 [2024-12-05 12:18:09.118360] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:38.323 [2024-12-05 12:18:09.118372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:38.323 2024/12/05 12:18:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:38.323 request: 00:14:38.323 { 00:14:38.323 "method": "bdev_nvme_attach_controller", 00:14:38.323 "params": { 00:14:38.323 "name": "TLSTEST", 00:14:38.323 "trtype": "tcp", 00:14:38.323 "traddr": "10.0.0.3", 00:14:38.323 "adrfam": "ipv4", 00:14:38.323 "trsvcid": "4420", 00:14:38.323 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:38.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.323 "prchk_reftag": false, 00:14:38.323 "prchk_guard": false, 00:14:38.323 "hdgst": false, 00:14:38.323 "ddgst": false, 00:14:38.323 "psk": "key0", 00:14:38.323 "allow_unrecognized_csi": false 00:14:38.323 } 00:14:38.323 } 00:14:38.323 Got JSON-RPC error response 00:14:38.323 GoRPCClient: error on JSON-RPC call 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82772 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82772 ']' 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82772 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82772 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:38.323 killing process with pid 82772 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82772' 00:14:38.323 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.323 00:14:38.323 Latency(us) 00:14:38.323 [2024-12-05T12:18:09.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.323 [2024-12-05T12:18:09.192Z] =================================================================================================================== 00:14:38.323 [2024-12-05T12:18:09.192Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82772 00:14:38.323 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82772 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:38.582 12:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82811 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82811 /var/tmp/bdevperf.sock 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82811 ']' 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.582 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.582 [2024-12-05 12:18:09.412675] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
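Each of these expected-failure cases is driven by the NOT wrapper from autotest_common.sh, visible in the trace as the es=0 / (( es > 128 )) / (( !es == 0 )) bookkeeping around every run_bdevperf call: it runs the wrapped command, records its exit status, and succeeds only if the command failed without being killed by a signal. A minimal sketch of the same idea (not the verbatim helper):

# NOT <cmd...>: succeed iff <cmd> exits non-zero, excluding signal deaths.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # 129+ => terminated by a signal
    (( es != 0 ))                # success only when the command failed
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''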
00:14:38.582 [2024-12-05 12:18:09.412793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82811 ] 00:14:38.841 [2024-12-05 12:18:09.550499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.841 [2024-12-05 12:18:09.602020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.100 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.100 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:39.100 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:39.100 [2024-12-05 12:18:09.939458] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:39.100 [2024-12-05 12:18:09.939499] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:39.100 2024/12/05 12:18:09 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:14:39.100 request: 00:14:39.100 { 00:14:39.100 "method": "keyring_file_add_key", 00:14:39.100 "params": { 00:14:39.100 "name": "key0", 00:14:39.100 "path": "" 00:14:39.100 } 00:14:39.100 } 00:14:39.100 Got JSON-RPC error response 00:14:39.100 GoRPCClient: error on JSON-RPC call 00:14:39.100 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:39.359 [2024-12-05 12:18:10.171661] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:39.359 [2024-12-05 12:18:10.171771] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:39.359 2024/12/05 12:18:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:14:39.359 request: 00:14:39.359 { 00:14:39.359 "method": "bdev_nvme_attach_controller", 00:14:39.359 "params": { 00:14:39.359 "name": "TLSTEST", 00:14:39.359 "trtype": "tcp", 00:14:39.359 "traddr": "10.0.0.3", 00:14:39.359 "adrfam": "ipv4", 00:14:39.359 "trsvcid": "4420", 00:14:39.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.359 "prchk_reftag": false, 00:14:39.359 "prchk_guard": false, 00:14:39.359 "hdgst": false, 00:14:39.359 "ddgst": false, 00:14:39.359 "psk": "key0", 00:14:39.359 "allow_unrecognized_csi": false 00:14:39.359 } 00:14:39.359 } 00:14:39.359 Got JSON-RPC error response 00:14:39.359 GoRPCClient: error on JSON-RPC call 00:14:39.359 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82811 00:14:39.359 12:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82811 ']' 00:14:39.359 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82811 00:14:39.359 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:39.359 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.359 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82811 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:39.618 killing process with pid 82811 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82811' 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82811 00:14:39.618 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.618 00:14:39.618 Latency(us) 00:14:39.618 [2024-12-05T12:18:10.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.618 [2024-12-05T12:18:10.487Z] =================================================================================================================== 00:14:39.618 [2024-12-05T12:18:10.487Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82811 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82202 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82202 ']' 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82202 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82202 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:39.618 killing process with pid 82202 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82202' 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82202 00:14:39.618 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82202 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.OBQhAmcRBa 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.OBQhAmcRBa 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82860 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82860 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82860 ']' 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:39.878 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.878 [2024-12-05 12:18:10.730068] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
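The long key built at target/tls.sh@160 follows the NVMe TLS PSK interchange format: the configured key bytes plus their CRC32 (appended little-endian) are base64-encoded and wrapped in a NVMeTLSkey-1:<hh>: envelope, where 01 denotes SHA-256 and 02 SHA-384. A sketch of the computation mirroring the inline `python -` call traced above from nvmf/common.sh; per the trace output, the 48-character hex string is used verbatim as the key material:

key=00112233445566778899aabbccddeeff0011223344556677
digest=2
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key material, taken as-is
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian CRC32
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:"
      f"{base64.b64encode(key + crc).decode()}:")
EOF
# => NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: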
00:14:39.878 [2024-12-05 12:18:10.730170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.137 [2024-12-05 12:18:10.870669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.137 [2024-12-05 12:18:10.912237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.137 [2024-12-05 12:18:10.912317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.137 [2024-12-05 12:18:10.912345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.137 [2024-12-05 12:18:10.912352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.137 [2024-12-05 12:18:10.912358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.137 [2024-12-05 12:18:10.912737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.OBQhAmcRBa 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OBQhAmcRBa 00:14:40.395 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:40.653 [2024-12-05 12:18:11.276832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.653 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:40.653 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:40.912 [2024-12-05 12:18:11.768956] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:40.912 [2024-12-05 12:18:11.769198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:41.172 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.172 malloc0 00:14:41.172 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:41.430 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:14:41.689 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OBQhAmcRBa 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OBQhAmcRBa 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82956 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82956 /var/tmp/bdevperf.sock 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82956 ']' 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.948 12:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.948 [2024-12-05 12:18:12.708499] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
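setup_nvmf_tgt (target/tls.sh@50-59, traced above and repeated later in this run) is the target-side half of every test: create the TCP transport, create the subsystem, open a TLS-enabled listener (the -k flag), back it with a malloc namespace, register the key file, and allow the host against that key. The same rpc.py sequence collected in one place, against the target's default RPC socket:

# Target-side TLS setup, exactly as traced:
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0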
00:14:41.948 [2024-12-05 12:18:12.708600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82956 ] 00:14:42.208 [2024-12-05 12:18:12.853861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.208 [2024-12-05 12:18:12.906909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.208 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.208 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:42.208 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:14:42.467 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:42.725 [2024-12-05 12:18:13.473771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.726 TLSTESTn1 00:14:42.726 12:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:42.984 Running I/O for 10 seconds... 00:14:44.856 4746.00 IOPS, 18.54 MiB/s [2024-12-05T12:18:16.660Z] 4838.00 IOPS, 18.90 MiB/s [2024-12-05T12:18:18.036Z] 4858.67 IOPS, 18.98 MiB/s [2024-12-05T12:18:18.971Z] 4883.75 IOPS, 19.08 MiB/s [2024-12-05T12:18:19.906Z] 4891.00 IOPS, 19.11 MiB/s [2024-12-05T12:18:20.841Z] 4891.00 IOPS, 19.11 MiB/s [2024-12-05T12:18:21.777Z] 4891.71 IOPS, 19.11 MiB/s [2024-12-05T12:18:22.714Z] 4895.50 IOPS, 19.12 MiB/s [2024-12-05T12:18:24.091Z] 4894.00 IOPS, 19.12 MiB/s [2024-12-05T12:18:24.091Z] 4895.90 IOPS, 19.12 MiB/s 00:14:53.222 Latency(us) 00:14:53.222 [2024-12-05T12:18:24.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.222 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:53.222 Verification LBA range: start 0x0 length 0x2000 00:14:53.222 TLSTESTn1 : 10.01 4902.01 19.15 0.00 0.00 26068.07 4349.21 22878.02 00:14:53.222 [2024-12-05T12:18:24.091Z] =================================================================================================================== 00:14:53.222 [2024-12-05T12:18:24.091Z] Total : 4902.01 19.15 0.00 0.00 26068.07 4349.21 22878.02 00:14:53.222 { 00:14:53.222 "results": [ 00:14:53.222 { 00:14:53.222 "job": "TLSTESTn1", 00:14:53.222 "core_mask": "0x4", 00:14:53.222 "workload": "verify", 00:14:53.222 "status": "finished", 00:14:53.222 "verify_range": { 00:14:53.222 "start": 0, 00:14:53.222 "length": 8192 00:14:53.222 }, 00:14:53.222 "queue_depth": 128, 00:14:53.222 "io_size": 4096, 00:14:53.222 "runtime": 10.01344, 00:14:53.222 "iops": 4902.011696280199, 00:14:53.222 "mibps": 19.14848318859453, 00:14:53.222 "io_failed": 0, 00:14:53.222 "io_timeout": 0, 00:14:53.222 "avg_latency_us": 26068.0729098095, 00:14:53.222 "min_latency_us": 4349.2072727272725, 00:14:53.222 "max_latency_us": 22878.02181818182 00:14:53.222 } 00:14:53.222 ], 00:14:53.222 "core_count": 1 00:14:53.222 } 00:14:53.222 12:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82956 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82956 ']' 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82956 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82956 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:53.222 killing process with pid 82956 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:53.222 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82956' 00:14:53.222 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.222 00:14:53.222 Latency(us) 00:14:53.222 [2024-12-05T12:18:24.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.222 [2024-12-05T12:18:24.091Z] =================================================================================================================== 00:14:53.222 [2024-12-05T12:18:24.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82956 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82956 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.OBQhAmcRBa 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OBQhAmcRBa 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OBQhAmcRBa 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OBQhAmcRBa 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.OBQhAmcRBa 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83102 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83102 /var/tmp/bdevperf.sock 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83102 ']' 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.223 12:18:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.223 [2024-12-05 12:18:23.963448] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:14:53.223 [2024-12-05 12:18:23.963559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83102 ] 00:14:53.482 [2024-12-05 12:18:24.101370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.482 [2024-12-05 12:18:24.144030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.482 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.482 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:53.482 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:14:53.741 [2024-12-05 12:18:24.462240] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OBQhAmcRBa': 0100666 00:14:53.741 [2024-12-05 12:18:24.462272] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:53.741 2024/12/05 12:18:24 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.OBQhAmcRBa], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:14:53.741 request: 00:14:53.741 { 00:14:53.741 "method": "keyring_file_add_key", 00:14:53.741 "params": { 00:14:53.741 "name": "key0", 00:14:53.741 "path": "/tmp/tmp.OBQhAmcRBa" 00:14:53.741 } 00:14:53.741 } 00:14:53.741 Got JSON-RPC error response 00:14:53.741 GoRPCClient: error on JSON-RPC call 00:14:53.741 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:54.000 [2024-12-05 12:18:24.686437] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:54.000 [2024-12-05 12:18:24.686497] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:54.000 2024/12/05 12:18:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:14:54.000 request: 00:14:54.000 { 00:14:54.000 "method": "bdev_nvme_attach_controller", 00:14:54.000 "params": { 00:14:54.000 "name": "TLSTEST", 00:14:54.000 "trtype": "tcp", 00:14:54.000 "traddr": "10.0.0.3", 00:14:54.000 "adrfam": "ipv4", 00:14:54.000 "trsvcid": "4420", 00:14:54.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.000 "prchk_reftag": false, 00:14:54.000 "prchk_guard": false, 00:14:54.000 "hdgst": false, 00:14:54.000 "ddgst": false, 00:14:54.000 "psk": "key0", 00:14:54.000 "allow_unrecognized_csi": false 00:14:54.000 } 00:14:54.000 } 00:14:54.000 Got JSON-RPC error response 00:14:54.000 GoRPCClient: error on JSON-RPC call 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83102 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83102 ']' 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83102 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83102 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:54.000 killing process with pid 83102 00:14:54.000 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.000 00:14:54.000 Latency(us) 00:14:54.000 [2024-12-05T12:18:24.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.000 [2024-12-05T12:18:24.869Z] =================================================================================================================== 00:14:54.000 [2024-12-05T12:18:24.869Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83102' 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83102 00:14:54.000 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83102 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
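keyring_file_add_key checks the key file's mode and rejects anything readable by group or other, which is why the chmod 0666 at target/tls.sh@171 makes the registration above fail with "Invalid permissions for key file ... 0100666" before any connection is attempted. Restoring owner-only access (as target/tls.sh@182 does below) makes the key loadable again:

# Key files must be 0600 (owner read/write only) to be accepted:
chmod 0600 /tmp/tmp.OBQhAmcRBa
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa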
00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 82860 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82860 ']' 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82860 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82860 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:54.259 killing process with pid 82860 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82860' 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82860 00:14:54.259 12:18:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82860 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83146 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83146 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83146 ']' 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.517 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.517 [2024-12-05 12:18:25.210411] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:14:54.517 [2024-12-05 12:18:25.210501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.517 [2024-12-05 12:18:25.353560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.774 [2024-12-05 12:18:25.393323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.774 [2024-12-05 12:18:25.393383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.774 [2024-12-05 12:18:25.393393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.774 [2024-12-05 12:18:25.393400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.774 [2024-12-05 12:18:25.393407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.774 [2024-12-05 12:18:25.393747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.774 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.774 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:54.774 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.774 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.OBQhAmcRBa 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.OBQhAmcRBa 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.OBQhAmcRBa 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OBQhAmcRBa 00:14:54.775 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:55.032 [2024-12-05 12:18:25.772005] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.032 12:18:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:55.290 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:55.548 [2024-12-05 12:18:26.264074] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:55.548 [2024-12-05 12:18:26.264360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:55.548 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:55.813 malloc0 00:14:55.813 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:56.109 12:18:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:14:56.370 [2024-12-05 12:18:26.995542] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.OBQhAmcRBa': 0100666 00:14:56.370 [2024-12-05 12:18:26.995580] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:56.370 2024/12/05 12:18:26 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.OBQhAmcRBa], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:14:56.370 request: 00:14:56.370 { 00:14:56.370 "method": "keyring_file_add_key", 00:14:56.370 "params": { 00:14:56.370 "name": "key0", 00:14:56.370 "path": "/tmp/tmp.OBQhAmcRBa" 00:14:56.370 } 00:14:56.370 } 00:14:56.370 Got JSON-RPC error response 00:14:56.370 GoRPCClient: error on JSON-RPC call 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:56.370 [2024-12-05 12:18:27.211622] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:56.370 [2024-12-05 12:18:27.211715] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:56.370 2024/12/05 12:18:27 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:14:56.370 request: 00:14:56.370 { 00:14:56.370 "method": "nvmf_subsystem_add_host", 00:14:56.370 "params": { 00:14:56.370 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.370 "host": "nqn.2016-06.io.spdk:host1", 00:14:56.370 "psk": "key0" 00:14:56.370 } 00:14:56.370 } 00:14:56.370 Got JSON-RPC error response 00:14:56.370 GoRPCClient: error on JSON-RPC call 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83146 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83146 ']' 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 83146 00:14:56.370 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83146 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:56.645 killing process with pid 83146 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83146' 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83146 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83146 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.OBQhAmcRBa 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83256 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83256 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83256 ']' 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.645 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.645 [2024-12-05 12:18:27.508740] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:14:56.645 [2024-12-05 12:18:27.508817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.903 [2024-12-05 12:18:27.648361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.903 [2024-12-05 12:18:27.688133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:56.903 [2024-12-05 12:18:27.688182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.903 [2024-12-05 12:18:27.688208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.903 [2024-12-05 12:18:27.688217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.903 [2024-12-05 12:18:27.688223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.903 [2024-12-05 12:18:27.688614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.OBQhAmcRBa 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OBQhAmcRBa 00:14:57.162 12:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:57.420 [2024-12-05 12:18:28.108099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.420 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:57.678 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:57.935 [2024-12-05 12:18:28.608142] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:57.935 [2024-12-05 12:18:28.608411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:57.935 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.193 malloc0 00:14:58.193 12:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:58.451 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:14:58.710 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:58.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
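#
# NOTE: the first setup attempt above failed because the PSK file
# /tmp/tmp.OBQhAmcRBa was world-readable (mode 0100666) and keyring_file_add_key
# refuses such keys; after chmod 0600 /tmp/tmp.OBQhAmcRBa (tls.sh@182) the
# identical sequence succeeds. Condensed from the tls.sh@52-59 trace above, the
# target-side TLS setup is (rpc.py is shorthand for
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py):
#
#   rpc.py nvmf_create_transport -t tcp -o
#   rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
#   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
#   rpc.py bdev_malloc_create 32 4096 -b malloc0
#   rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
#   rpc.py keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa
#   rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
#
# The -k flag on the listener enables the (experimental) TLS secure channel; the
# host is then authorized by keyring name (--psk key0), not by key file path.
#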
00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83352 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83352 /var/tmp/bdevperf.sock 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83352 ']' 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.969 12:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.969 [2024-12-05 12:18:29.680660] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:14:58.969 [2024-12-05 12:18:29.680778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83352 ] 00:14:58.969 [2024-12-05 12:18:29.834647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.228 [2024-12-05 12:18:29.893215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.796 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.796 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:59.796 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:15:00.056 12:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:00.315 [2024-12-05 12:18:31.056193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.315 TLSTESTn1 00:15:00.315 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:00.895 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:00.895 "subsystems": [ 00:15:00.895 { 00:15:00.895 "subsystem": "keyring", 00:15:00.895 "config": [ 00:15:00.895 { 00:15:00.895 "method": "keyring_file_add_key", 00:15:00.895 "params": { 00:15:00.895 "name": "key0", 00:15:00.895 "path": "/tmp/tmp.OBQhAmcRBa" 00:15:00.896 } 00:15:00.896 } 00:15:00.896 ] 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "subsystem": "iobuf", 00:15:00.896 "config": [ 00:15:00.896 { 00:15:00.896 "method": "iobuf_set_options", 
00:15:00.896 "params": { 00:15:00.896 "enable_numa": false, 00:15:00.896 "large_bufsize": 135168, 00:15:00.896 "large_pool_count": 1024, 00:15:00.896 "small_bufsize": 8192, 00:15:00.896 "small_pool_count": 8192 00:15:00.896 } 00:15:00.896 } 00:15:00.896 ] 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "subsystem": "sock", 00:15:00.896 "config": [ 00:15:00.896 { 00:15:00.896 "method": "sock_set_default_impl", 00:15:00.896 "params": { 00:15:00.896 "impl_name": "posix" 00:15:00.896 } 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "method": "sock_impl_set_options", 00:15:00.896 "params": { 00:15:00.896 "enable_ktls": false, 00:15:00.896 "enable_placement_id": 0, 00:15:00.896 "enable_quickack": false, 00:15:00.896 "enable_recv_pipe": true, 00:15:00.896 "enable_zerocopy_send_client": false, 00:15:00.896 "enable_zerocopy_send_server": true, 00:15:00.896 "impl_name": "ssl", 00:15:00.896 "recv_buf_size": 4096, 00:15:00.896 "send_buf_size": 4096, 00:15:00.896 "tls_version": 0, 00:15:00.896 "zerocopy_threshold": 0 00:15:00.896 } 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "method": "sock_impl_set_options", 00:15:00.896 "params": { 00:15:00.896 "enable_ktls": false, 00:15:00.896 "enable_placement_id": 0, 00:15:00.896 "enable_quickack": false, 00:15:00.896 "enable_recv_pipe": true, 00:15:00.896 "enable_zerocopy_send_client": false, 00:15:00.896 "enable_zerocopy_send_server": true, 00:15:00.896 "impl_name": "posix", 00:15:00.896 "recv_buf_size": 2097152, 00:15:00.896 "send_buf_size": 2097152, 00:15:00.896 "tls_version": 0, 00:15:00.896 "zerocopy_threshold": 0 00:15:00.896 } 00:15:00.896 } 00:15:00.896 ] 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "subsystem": "vmd", 00:15:00.896 "config": [] 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "subsystem": "accel", 00:15:00.896 "config": [ 00:15:00.896 { 00:15:00.896 "method": "accel_set_options", 00:15:00.896 "params": { 00:15:00.896 "buf_count": 2048, 00:15:00.896 "large_cache_size": 16, 00:15:00.896 "sequence_count": 2048, 00:15:00.896 "small_cache_size": 128, 00:15:00.896 "task_count": 2048 00:15:00.896 } 00:15:00.896 } 00:15:00.896 ] 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "subsystem": "bdev", 00:15:00.896 "config": [ 00:15:00.896 { 00:15:00.896 "method": "bdev_set_options", 00:15:00.896 "params": { 00:15:00.896 "bdev_auto_examine": true, 00:15:00.896 "bdev_io_cache_size": 256, 00:15:00.896 "bdev_io_pool_size": 65535, 00:15:00.896 "iobuf_large_cache_size": 16, 00:15:00.896 "iobuf_small_cache_size": 128 00:15:00.896 } 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "method": "bdev_raid_set_options", 00:15:00.896 "params": { 00:15:00.896 "process_max_bandwidth_mb_sec": 0, 00:15:00.896 "process_window_size_kb": 1024 00:15:00.896 } 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "method": "bdev_iscsi_set_options", 00:15:00.896 "params": { 00:15:00.896 "timeout_sec": 30 00:15:00.896 } 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "method": "bdev_nvme_set_options", 00:15:00.896 "params": { 00:15:00.896 "action_on_timeout": "none", 00:15:00.896 "allow_accel_sequence": false, 00:15:00.896 "arbitration_burst": 0, 00:15:00.896 "bdev_retry_count": 3, 00:15:00.896 "ctrlr_loss_timeout_sec": 0, 00:15:00.896 "delay_cmd_submit": true, 00:15:00.896 "dhchap_dhgroups": [ 00:15:00.896 "null", 00:15:00.896 "ffdhe2048", 00:15:00.896 "ffdhe3072", 00:15:00.896 "ffdhe4096", 00:15:00.896 "ffdhe6144", 00:15:00.896 "ffdhe8192" 00:15:00.896 ], 00:15:00.896 "dhchap_digests": [ 00:15:00.896 "sha256", 00:15:00.896 "sha384", 00:15:00.896 "sha512" 00:15:00.896 ], 00:15:00.896 "disable_auto_failback": 
false, 00:15:00.896 "fast_io_fail_timeout_sec": 0, 00:15:00.896 "generate_uuids": false, 00:15:00.896 "high_priority_weight": 0, 00:15:00.896 "io_path_stat": false, 00:15:00.896 "io_queue_requests": 0, 00:15:00.896 "keep_alive_timeout_ms": 10000, 00:15:00.896 "low_priority_weight": 0, 00:15:00.896 "medium_priority_weight": 0, 00:15:00.896 "nvme_adminq_poll_period_us": 10000, 00:15:00.896 "nvme_error_stat": false, 00:15:00.896 "nvme_ioq_poll_period_us": 0, 00:15:00.896 "rdma_cm_event_timeout_ms": 0, 00:15:00.896 "rdma_max_cq_size": 0, 00:15:00.896 "rdma_srq_size": 0, 00:15:00.896 "reconnect_delay_sec": 0, 00:15:00.896 "timeout_admin_us": 0, 00:15:00.896 "timeout_us": 0, 00:15:00.896 "transport_ack_timeout": 0, 00:15:00.896 "transport_retry_count": 4, 00:15:00.896 "transport_tos": 0 00:15:00.896 } 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "method": "bdev_nvme_set_hotplug", 00:15:00.896 "params": { 00:15:00.896 "enable": false, 00:15:00.896 "period_us": 100000 00:15:00.896 } 00:15:00.896 }, 00:15:00.896 { 00:15:00.896 "method": "bdev_malloc_create", 00:15:00.896 "params": { 00:15:00.896 "block_size": 4096, 00:15:00.896 "dif_is_head_of_md": false, 00:15:00.896 "dif_pi_format": 0, 00:15:00.896 "dif_type": 0, 00:15:00.896 "md_size": 0, 00:15:00.896 "name": "malloc0", 00:15:00.896 "num_blocks": 8192, 00:15:00.896 "optimal_io_boundary": 0, 00:15:00.897 "physical_block_size": 4096, 00:15:00.897 "uuid": "e2c43c92-ce0f-41ba-865c-75d0cb8de42b" 00:15:00.897 } 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "method": "bdev_wait_for_examine" 00:15:00.897 } 00:15:00.897 ] 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "subsystem": "nbd", 00:15:00.897 "config": [] 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "subsystem": "scheduler", 00:15:00.897 "config": [ 00:15:00.897 { 00:15:00.897 "method": "framework_set_scheduler", 00:15:00.897 "params": { 00:15:00.897 "name": "static" 00:15:00.897 } 00:15:00.897 } 00:15:00.897 ] 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "subsystem": "nvmf", 00:15:00.897 "config": [ 00:15:00.897 { 00:15:00.897 "method": "nvmf_set_config", 00:15:00.897 "params": { 00:15:00.897 "admin_cmd_passthru": { 00:15:00.897 "identify_ctrlr": false 00:15:00.897 }, 00:15:00.897 "dhchap_dhgroups": [ 00:15:00.897 "null", 00:15:00.897 "ffdhe2048", 00:15:00.897 "ffdhe3072", 00:15:00.897 "ffdhe4096", 00:15:00.897 "ffdhe6144", 00:15:00.897 "ffdhe8192" 00:15:00.897 ], 00:15:00.897 "dhchap_digests": [ 00:15:00.897 "sha256", 00:15:00.897 "sha384", 00:15:00.897 "sha512" 00:15:00.897 ], 00:15:00.897 "discovery_filter": "match_any" 00:15:00.897 } 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "method": "nvmf_set_max_subsystems", 00:15:00.897 "params": { 00:15:00.897 "max_subsystems": 1024 00:15:00.897 } 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "method": "nvmf_set_crdt", 00:15:00.897 "params": { 00:15:00.897 "crdt1": 0, 00:15:00.897 "crdt2": 0, 00:15:00.897 "crdt3": 0 00:15:00.897 } 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "method": "nvmf_create_transport", 00:15:00.897 "params": { 00:15:00.897 "abort_timeout_sec": 1, 00:15:00.897 "ack_timeout": 0, 00:15:00.897 "buf_cache_size": 4294967295, 00:15:00.897 "c2h_success": false, 00:15:00.897 "data_wr_pool_size": 0, 00:15:00.897 "dif_insert_or_strip": false, 00:15:00.897 "in_capsule_data_size": 4096, 00:15:00.897 "io_unit_size": 131072, 00:15:00.897 "max_aq_depth": 128, 00:15:00.897 "max_io_qpairs_per_ctrlr": 127, 00:15:00.897 "max_io_size": 131072, 00:15:00.897 "max_queue_depth": 128, 00:15:00.897 "num_shared_buffers": 511, 00:15:00.897 "sock_priority": 0, 
00:15:00.897 "trtype": "TCP", 00:15:00.897 "zcopy": false 00:15:00.897 } 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "method": "nvmf_create_subsystem", 00:15:00.897 "params": { 00:15:00.897 "allow_any_host": false, 00:15:00.897 "ana_reporting": false, 00:15:00.897 "max_cntlid": 65519, 00:15:00.897 "max_namespaces": 10, 00:15:00.897 "min_cntlid": 1, 00:15:00.897 "model_number": "SPDK bdev Controller", 00:15:00.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.897 "serial_number": "SPDK00000000000001" 00:15:00.897 } 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "method": "nvmf_subsystem_add_host", 00:15:00.897 "params": { 00:15:00.897 "host": "nqn.2016-06.io.spdk:host1", 00:15:00.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.897 "psk": "key0" 00:15:00.897 } 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "method": "nvmf_subsystem_add_ns", 00:15:00.897 "params": { 00:15:00.897 "namespace": { 00:15:00.897 "bdev_name": "malloc0", 00:15:00.897 "nguid": "E2C43C92CE0F41BA865C75D0CB8DE42B", 00:15:00.897 "no_auto_visible": false, 00:15:00.897 "nsid": 1, 00:15:00.897 "uuid": "e2c43c92-ce0f-41ba-865c-75d0cb8de42b" 00:15:00.897 }, 00:15:00.897 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:00.897 } 00:15:00.897 }, 00:15:00.897 { 00:15:00.897 "method": "nvmf_subsystem_add_listener", 00:15:00.897 "params": { 00:15:00.897 "listen_address": { 00:15:00.897 "adrfam": "IPv4", 00:15:00.897 "traddr": "10.0.0.3", 00:15:00.897 "trsvcid": "4420", 00:15:00.897 "trtype": "TCP" 00:15:00.897 }, 00:15:00.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.897 "secure_channel": true 00:15:00.897 } 00:15:00.897 } 00:15:00.897 ] 00:15:00.897 } 00:15:00.897 ] 00:15:00.897 }' 00:15:00.897 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:01.156 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:01.156 "subsystems": [ 00:15:01.156 { 00:15:01.156 "subsystem": "keyring", 00:15:01.156 "config": [ 00:15:01.156 { 00:15:01.156 "method": "keyring_file_add_key", 00:15:01.156 "params": { 00:15:01.156 "name": "key0", 00:15:01.157 "path": "/tmp/tmp.OBQhAmcRBa" 00:15:01.157 } 00:15:01.157 } 00:15:01.157 ] 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "subsystem": "iobuf", 00:15:01.157 "config": [ 00:15:01.157 { 00:15:01.157 "method": "iobuf_set_options", 00:15:01.157 "params": { 00:15:01.157 "enable_numa": false, 00:15:01.157 "large_bufsize": 135168, 00:15:01.157 "large_pool_count": 1024, 00:15:01.157 "small_bufsize": 8192, 00:15:01.157 "small_pool_count": 8192 00:15:01.157 } 00:15:01.157 } 00:15:01.157 ] 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "subsystem": "sock", 00:15:01.157 "config": [ 00:15:01.157 { 00:15:01.157 "method": "sock_set_default_impl", 00:15:01.157 "params": { 00:15:01.157 "impl_name": "posix" 00:15:01.157 } 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "method": "sock_impl_set_options", 00:15:01.157 "params": { 00:15:01.157 "enable_ktls": false, 00:15:01.157 "enable_placement_id": 0, 00:15:01.157 "enable_quickack": false, 00:15:01.157 "enable_recv_pipe": true, 00:15:01.157 "enable_zerocopy_send_client": false, 00:15:01.157 "enable_zerocopy_send_server": true, 00:15:01.157 "impl_name": "ssl", 00:15:01.157 "recv_buf_size": 4096, 00:15:01.157 "send_buf_size": 4096, 00:15:01.157 "tls_version": 0, 00:15:01.157 "zerocopy_threshold": 0 00:15:01.157 } 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "method": "sock_impl_set_options", 00:15:01.157 "params": { 00:15:01.157 "enable_ktls": false, 
00:15:01.157 "enable_placement_id": 0, 00:15:01.157 "enable_quickack": false, 00:15:01.157 "enable_recv_pipe": true, 00:15:01.157 "enable_zerocopy_send_client": false, 00:15:01.157 "enable_zerocopy_send_server": true, 00:15:01.157 "impl_name": "posix", 00:15:01.157 "recv_buf_size": 2097152, 00:15:01.157 "send_buf_size": 2097152, 00:15:01.157 "tls_version": 0, 00:15:01.157 "zerocopy_threshold": 0 00:15:01.157 } 00:15:01.157 } 00:15:01.157 ] 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "subsystem": "vmd", 00:15:01.157 "config": [] 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "subsystem": "accel", 00:15:01.157 "config": [ 00:15:01.157 { 00:15:01.157 "method": "accel_set_options", 00:15:01.157 "params": { 00:15:01.157 "buf_count": 2048, 00:15:01.157 "large_cache_size": 16, 00:15:01.157 "sequence_count": 2048, 00:15:01.157 "small_cache_size": 128, 00:15:01.157 "task_count": 2048 00:15:01.157 } 00:15:01.157 } 00:15:01.157 ] 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "subsystem": "bdev", 00:15:01.157 "config": [ 00:15:01.157 { 00:15:01.157 "method": "bdev_set_options", 00:15:01.157 "params": { 00:15:01.157 "bdev_auto_examine": true, 00:15:01.157 "bdev_io_cache_size": 256, 00:15:01.157 "bdev_io_pool_size": 65535, 00:15:01.157 "iobuf_large_cache_size": 16, 00:15:01.157 "iobuf_small_cache_size": 128 00:15:01.157 } 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "method": "bdev_raid_set_options", 00:15:01.157 "params": { 00:15:01.157 "process_max_bandwidth_mb_sec": 0, 00:15:01.157 "process_window_size_kb": 1024 00:15:01.157 } 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "method": "bdev_iscsi_set_options", 00:15:01.157 "params": { 00:15:01.157 "timeout_sec": 30 00:15:01.157 } 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "method": "bdev_nvme_set_options", 00:15:01.157 "params": { 00:15:01.157 "action_on_timeout": "none", 00:15:01.157 "allow_accel_sequence": false, 00:15:01.157 "arbitration_burst": 0, 00:15:01.157 "bdev_retry_count": 3, 00:15:01.157 "ctrlr_loss_timeout_sec": 0, 00:15:01.157 "delay_cmd_submit": true, 00:15:01.157 "dhchap_dhgroups": [ 00:15:01.157 "null", 00:15:01.157 "ffdhe2048", 00:15:01.157 "ffdhe3072", 00:15:01.157 "ffdhe4096", 00:15:01.157 "ffdhe6144", 00:15:01.157 "ffdhe8192" 00:15:01.157 ], 00:15:01.157 "dhchap_digests": [ 00:15:01.157 "sha256", 00:15:01.157 "sha384", 00:15:01.157 "sha512" 00:15:01.157 ], 00:15:01.157 "disable_auto_failback": false, 00:15:01.157 "fast_io_fail_timeout_sec": 0, 00:15:01.157 "generate_uuids": false, 00:15:01.157 "high_priority_weight": 0, 00:15:01.157 "io_path_stat": false, 00:15:01.157 "io_queue_requests": 512, 00:15:01.157 "keep_alive_timeout_ms": 10000, 00:15:01.157 "low_priority_weight": 0, 00:15:01.157 "medium_priority_weight": 0, 00:15:01.157 "nvme_adminq_poll_period_us": 10000, 00:15:01.157 "nvme_error_stat": false, 00:15:01.157 "nvme_ioq_poll_period_us": 0, 00:15:01.157 "rdma_cm_event_timeout_ms": 0, 00:15:01.157 "rdma_max_cq_size": 0, 00:15:01.157 "rdma_srq_size": 0, 00:15:01.157 "reconnect_delay_sec": 0, 00:15:01.157 "timeout_admin_us": 0, 00:15:01.157 "timeout_us": 0, 00:15:01.157 "transport_ack_timeout": 0, 00:15:01.157 "transport_retry_count": 4, 00:15:01.157 "transport_tos": 0 00:15:01.157 } 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "method": "bdev_nvme_attach_controller", 00:15:01.157 "params": { 00:15:01.157 "adrfam": "IPv4", 00:15:01.157 "ctrlr_loss_timeout_sec": 0, 00:15:01.157 "ddgst": false, 00:15:01.157 "fast_io_fail_timeout_sec": 0, 00:15:01.157 "hdgst": false, 00:15:01.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.157 
"multipath": "multipath", 00:15:01.157 "name": "TLSTEST", 00:15:01.157 "prchk_guard": false, 00:15:01.157 "prchk_reftag": false, 00:15:01.157 "psk": "key0", 00:15:01.157 "reconnect_delay_sec": 0, 00:15:01.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.157 "traddr": "10.0.0.3", 00:15:01.157 "trsvcid": "4420", 00:15:01.157 "trtype": "TCP" 00:15:01.157 } 00:15:01.157 }, 00:15:01.157 { 00:15:01.157 "method": "bdev_nvme_set_hotplug", 00:15:01.157 "params": { 00:15:01.157 "enable": false, 00:15:01.157 "period_us": 100000 00:15:01.157 } 00:15:01.158 }, 00:15:01.158 { 00:15:01.158 "method": "bdev_wait_for_examine" 00:15:01.158 } 00:15:01.158 ] 00:15:01.158 }, 00:15:01.158 { 00:15:01.158 "subsystem": "nbd", 00:15:01.158 "config": [] 00:15:01.158 } 00:15:01.158 ] 00:15:01.158 }' 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83352 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83352 ']' 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83352 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83352 00:15:01.158 killing process with pid 83352 00:15:01.158 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.158 00:15:01.158 Latency(us) 00:15:01.158 [2024-12-05T12:18:32.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.158 [2024-12-05T12:18:32.027Z] =================================================================================================================== 00:15:01.158 [2024-12-05T12:18:32.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83352' 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83352 00:15:01.158 12:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83352 00:15:01.158 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83256 00:15:01.158 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83256 ']' 00:15:01.158 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83256 00:15:01.158 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:01.158 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.158 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83256 00:15:01.417 killing process with pid 83256 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:01.417 12:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83256' 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83256 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83256 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.417 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:01.417 "subsystems": [ 00:15:01.417 { 00:15:01.417 "subsystem": "keyring", 00:15:01.417 "config": [ 00:15:01.417 { 00:15:01.417 "method": "keyring_file_add_key", 00:15:01.417 "params": { 00:15:01.417 "name": "key0", 00:15:01.417 "path": "/tmp/tmp.OBQhAmcRBa" 00:15:01.417 } 00:15:01.417 } 00:15:01.417 ] 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "subsystem": "iobuf", 00:15:01.417 "config": [ 00:15:01.417 { 00:15:01.417 "method": "iobuf_set_options", 00:15:01.417 "params": { 00:15:01.417 "enable_numa": false, 00:15:01.417 "large_bufsize": 135168, 00:15:01.417 "large_pool_count": 1024, 00:15:01.417 "small_bufsize": 8192, 00:15:01.417 "small_pool_count": 8192 00:15:01.417 } 00:15:01.417 } 00:15:01.417 ] 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "subsystem": "sock", 00:15:01.417 "config": [ 00:15:01.417 { 00:15:01.417 "method": "sock_set_default_impl", 00:15:01.417 "params": { 00:15:01.417 "impl_name": "posix" 00:15:01.417 } 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "method": "sock_impl_set_options", 00:15:01.417 "params": { 00:15:01.417 "enable_ktls": false, 00:15:01.417 "enable_placement_id": 0, 00:15:01.417 "enable_quickack": false, 00:15:01.417 "enable_recv_pipe": true, 00:15:01.417 "enable_zerocopy_send_client": false, 00:15:01.417 "enable_zerocopy_send_server": true, 00:15:01.417 "impl_name": "ssl", 00:15:01.417 "recv_buf_size": 4096, 00:15:01.417 "send_buf_size": 4096, 00:15:01.417 "tls_version": 0, 00:15:01.417 "zerocopy_threshold": 0 00:15:01.417 } 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "method": "sock_impl_set_options", 00:15:01.417 "params": { 00:15:01.417 "enable_ktls": false, 00:15:01.417 "enable_placement_id": 0, 00:15:01.417 "enable_quickack": false, 00:15:01.417 "enable_recv_pipe": true, 00:15:01.417 "enable_zerocopy_send_client": false, 00:15:01.417 "enable_zerocopy_send_server": true, 00:15:01.417 "impl_name": "posix", 00:15:01.417 "recv_buf_size": 2097152, 00:15:01.417 "send_buf_size": 2097152, 00:15:01.417 "tls_version": 0, 00:15:01.417 "zerocopy_threshold": 0 00:15:01.417 } 00:15:01.417 } 00:15:01.417 ] 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "subsystem": "vmd", 00:15:01.417 "config": [] 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "subsystem": "accel", 00:15:01.417 "config": [ 00:15:01.417 { 00:15:01.417 "method": "accel_set_options", 00:15:01.417 "params": { 00:15:01.417 "buf_count": 2048, 00:15:01.417 "large_cache_size": 16, 00:15:01.417 "sequence_count": 2048, 00:15:01.417 "small_cache_size": 128, 00:15:01.417 "task_count": 2048 00:15:01.417 } 00:15:01.417 } 00:15:01.417 ] 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "subsystem": "bdev", 00:15:01.417 "config": [ 00:15:01.417 { 00:15:01.417 "method": 
"bdev_set_options", 00:15:01.417 "params": { 00:15:01.417 "bdev_auto_examine": true, 00:15:01.417 "bdev_io_cache_size": 256, 00:15:01.417 "bdev_io_pool_size": 65535, 00:15:01.417 "iobuf_large_cache_size": 16, 00:15:01.417 "iobuf_small_cache_size": 128 00:15:01.417 } 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "method": "bdev_raid_set_options", 00:15:01.417 "params": { 00:15:01.417 "process_max_bandwidth_mb_sec": 0, 00:15:01.417 "process_window_size_kb": 1024 00:15:01.417 } 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "method": "bdev_iscsi_set_options", 00:15:01.417 "params": { 00:15:01.417 "timeout_sec": 30 00:15:01.417 } 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "method": "bdev_nvme_set_options", 00:15:01.417 "params": { 00:15:01.417 "action_on_timeout": "none", 00:15:01.417 "allow_accel_sequence": false, 00:15:01.417 "arbitration_burst": 0, 00:15:01.417 "bdev_retry_count": 3, 00:15:01.418 "ctrlr_loss_timeout_sec": 0, 00:15:01.418 "delay_cmd_submit": true, 00:15:01.418 "dhchap_dhgroups": [ 00:15:01.418 "null", 00:15:01.418 "ffdhe2048", 00:15:01.418 "ffdhe3072", 00:15:01.418 "ffdhe4096", 00:15:01.418 "ffdhe6144", 00:15:01.418 "ffdhe8192" 00:15:01.418 ], 00:15:01.418 "dhchap_digests": [ 00:15:01.418 "sha256", 00:15:01.418 "sha384", 00:15:01.418 "sha512" 00:15:01.418 ], 00:15:01.418 "disable_auto_failback": false, 00:15:01.418 "fast_io_fail_timeout_sec": 0, 00:15:01.418 "generate_uuids": false, 00:15:01.418 "high_priority_weight": 0, 00:15:01.418 "io_path_stat": false, 00:15:01.418 "io_queue_requests": 0, 00:15:01.418 "keep_alive_timeout_ms": 10000, 00:15:01.418 "low_priority_weight": 0, 00:15:01.418 "medium_priority_weight": 0, 00:15:01.418 "nvme_adminq_poll_period_us": 10000, 00:15:01.418 "nvme_error_stat": false, 00:15:01.418 "nvme_ioq_poll_period_us": 0, 00:15:01.418 "rdma_cm_event_timeout_ms": 0, 00:15:01.418 "rdma_max_cq_size": 0, 00:15:01.418 "rdma_srq_size": 0, 00:15:01.418 "reconnect_delay_sec": 0, 00:15:01.418 "timeout_admin_us": 0, 00:15:01.418 "timeout_us": 0, 00:15:01.418 "transport_ack_timeout": 0, 00:15:01.418 "transport_retry_count": 4, 00:15:01.418 "transport_tos": 0 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "bdev_nvme_set_hotplug", 00:15:01.418 "params": { 00:15:01.418 "enable": false, 00:15:01.418 "period_us": 100000 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "bdev_malloc_create", 00:15:01.418 "params": { 00:15:01.418 "block_size": 4096, 00:15:01.418 "dif_is_head_of_md": false, 00:15:01.418 "dif_pi_format": 0, 00:15:01.418 "dif_type": 0, 00:15:01.418 "md_size": 0, 00:15:01.418 "name": "malloc0", 00:15:01.418 "num_blocks": 8192, 00:15:01.418 "optimal_io_boundary": 0, 00:15:01.418 "physical_block_size": 4096, 00:15:01.418 "uuid": "e2c43c92-ce0f-41ba-865c-75d0cb8de42b" 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "bdev_wait_for_examine" 00:15:01.418 } 00:15:01.418 ] 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "subsystem": "nbd", 00:15:01.418 "config": [] 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "subsystem": "scheduler", 00:15:01.418 "config": [ 00:15:01.418 { 00:15:01.418 "method": "framework_set_scheduler", 00:15:01.418 "params": { 00:15:01.418 "name": "static" 00:15:01.418 } 00:15:01.418 } 00:15:01.418 ] 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "subsystem": "nvmf", 00:15:01.418 "config": [ 00:15:01.418 { 00:15:01.418 "method": "nvmf_set_config", 00:15:01.418 "params": { 00:15:01.418 "admin_cmd_passthru": { 00:15:01.418 "identify_ctrlr": false 00:15:01.418 }, 00:15:01.418 "dhchap_dhgroups": [ 
00:15:01.418 "null", 00:15:01.418 "ffdhe2048", 00:15:01.418 "ffdhe3072", 00:15:01.418 "ffdhe4096", 00:15:01.418 "ffdhe6144", 00:15:01.418 "ffdhe8192" 00:15:01.418 ], 00:15:01.418 "dhchap_digests": [ 00:15:01.418 "sha256", 00:15:01.418 "sha384", 00:15:01.418 "sha512" 00:15:01.418 ], 00:15:01.418 "discovery_filter": "match_any" 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "nvmf_set_max_subsystems", 00:15:01.418 "params": { 00:15:01.418 "max_subsystems": 1024 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "nvmf_set_crdt", 00:15:01.418 "params": { 00:15:01.418 "crdt1": 0, 00:15:01.418 "crdt2": 0, 00:15:01.418 "crdt3": 0 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "nvmf_create_transport", 00:15:01.418 "params": { 00:15:01.418 "abort_timeout_sec": 1, 00:15:01.418 "ack_timeout": 0, 00:15:01.418 "buf_cache_size": 4294967295, 00:15:01.418 "c2h_success": false, 00:15:01.418 "data_wr_pool_size": 0, 00:15:01.418 "dif_insert_or_strip": false, 00:15:01.418 "in_capsule_data_size": 4096, 00:15:01.418 "io_unit_size": 131072, 00:15:01.418 "max_aq_depth": 128, 00:15:01.418 "max_io_qpairs_per_ctrlr": 127, 00:15:01.418 "max_io_size": 131072, 00:15:01.418 "max_queue_depth": 128, 00:15:01.418 "num_shared_buffers": 511, 00:15:01.418 "sock_priority": 0, 00:15:01.418 "trtype": "TCP", 00:15:01.418 "zcopy": false 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "nvmf_create_subsystem", 00:15:01.418 "params": { 00:15:01.418 "allow_any_host": false, 00:15:01.418 "ana_reporting": false, 00:15:01.418 "max_cntlid": 65519, 00:15:01.418 "max_namespaces": 10, 00:15:01.418 "min_cntlid": 1, 00:15:01.418 "model_number": "SPDK bdev Controller", 00:15:01.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.418 "serial_number": "SPDK00000000000001" 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "nvmf_subsystem_add_host", 00:15:01.418 "params": { 00:15:01.418 "host": "nqn.2016-06.io.spdk:host1", 00:15:01.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.418 "psk": "key0" 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "nvmf_subsystem_add_ns", 00:15:01.418 "params": { 00:15:01.418 "namespace": { 00:15:01.418 "bdev_name": "malloc0", 00:15:01.418 "nguid": "E2C43C92CE0F41BA865C75D0CB8DE42B", 00:15:01.418 "no_auto_visible": false, 00:15:01.418 "nsid": 1, 00:15:01.418 "uuid": "e2c43c92-ce0f-41ba-865c-75d0cb8de42b" 00:15:01.418 }, 00:15:01.418 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:01.418 } 00:15:01.418 }, 00:15:01.418 { 00:15:01.418 "method": "nvmf_subsystem_add_listener", 00:15:01.418 "params": { 00:15:01.418 "listen_address": { 00:15:01.418 "adrfam": "IPv4", 00:15:01.418 "traddr": "10.0.0.3", 00:15:01.418 "trsvcid": "4420", 00:15:01.418 "trtype": "TCP" 00:15:01.418 }, 00:15:01.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.418 "secure_channel": true 00:15:01.418 } 00:15:01.418 } 00:15:01.418 ] 00:15:01.418 } 00:15:01.418 ] 00:15:01.418 }' 00:15:01.418 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83432 00:15:01.418 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:01.418 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83432 00:15:01.418 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83432 ']' 00:15:01.418 12:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.418 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.418 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.418 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.418 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.677 [2024-12-05 12:18:32.285813] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:01.677 [2024-12-05 12:18:32.285913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.677 [2024-12-05 12:18:32.426926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.677 [2024-12-05 12:18:32.466962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.677 [2024-12-05 12:18:32.467033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.677 [2024-12-05 12:18:32.467043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.677 [2024-12-05 12:18:32.467051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.677 [2024-12-05 12:18:32.467058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
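#
# NOTE: this second target (pid 83432) is not rebuilt with individual RPC calls;
# it is started straight from the JSON captured earlier with save_config, fed in
# via process substitution (-c /dev/fd/62 above). The same replay works outside
# the test harness -- a minimal sketch, with conf.json as an assumed scratch file:
#
#   rpc.py save_config > conf.json       # snapshot the live target configuration
#   nvmf_tgt -m 0x2 -c conf.json         # restart with an identical configuration
#
# save_config records the PSK by file path ("/tmp/tmp.OBQhAmcRBa" in the keyring
# section above), so the key file must still exist, with 0600 permissions, when
# the configuration is replayed.
#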
00:15:01.677 [2024-12-05 12:18:32.467449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.936 [2024-12-05 12:18:32.703707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.936 [2024-12-05 12:18:32.735678] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:01.936 [2024-12-05 12:18:32.735919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83476 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83476 /var/tmp/bdevperf.sock 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83476 ']' 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
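#
# NOTE: the initiator half is replayed the same way: the bdevperf configuration
# saved from the first pass is piped back in below via -c /dev/fd/63 (tls.sh@206).
# Unlike the target config, it contains a bdev_nvme_attach_controller entry with
# "psk": "key0", so the TLS connection to 10.0.0.3:4420 is re-established from
# the config alone and the run can go straight to perform_tests. An equivalent
# sketch, with bdevperf.json standing in for the /dev/fd/63 pipe and the full
# /home/vagrant/spdk_repo/spdk paths abbreviated:
#
#   bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf.json
#   bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
#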
00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:02.504 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:02.504 "subsystems": [ 00:15:02.504 { 00:15:02.504 "subsystem": "keyring", 00:15:02.504 "config": [ 00:15:02.504 { 00:15:02.504 "method": "keyring_file_add_key", 00:15:02.504 "params": { 00:15:02.504 "name": "key0", 00:15:02.504 "path": "/tmp/tmp.OBQhAmcRBa" 00:15:02.504 } 00:15:02.504 } 00:15:02.504 ] 00:15:02.504 }, 00:15:02.504 { 00:15:02.504 "subsystem": "iobuf", 00:15:02.504 "config": [ 00:15:02.504 { 00:15:02.504 "method": "iobuf_set_options", 00:15:02.504 "params": { 00:15:02.504 "enable_numa": false, 00:15:02.504 "large_bufsize": 135168, 00:15:02.504 "large_pool_count": 1024, 00:15:02.504 "small_bufsize": 8192, 00:15:02.504 "small_pool_count": 8192 00:15:02.504 } 00:15:02.504 } 00:15:02.504 ] 00:15:02.504 }, 00:15:02.504 { 00:15:02.504 "subsystem": "sock", 00:15:02.504 "config": [ 00:15:02.504 { 00:15:02.504 "method": "sock_set_default_impl", 00:15:02.504 "params": { 00:15:02.504 "impl_name": "posix" 00:15:02.504 } 00:15:02.504 }, 00:15:02.504 { 00:15:02.504 "method": "sock_impl_set_options", 00:15:02.504 "params": { 00:15:02.504 "enable_ktls": false, 00:15:02.504 "enable_placement_id": 0, 00:15:02.504 "enable_quickack": false, 00:15:02.504 "enable_recv_pipe": true, 00:15:02.504 "enable_zerocopy_send_client": false, 00:15:02.504 "enable_zerocopy_send_server": true, 00:15:02.504 "impl_name": "ssl", 00:15:02.504 "recv_buf_size": 4096, 00:15:02.504 "send_buf_size": 4096, 00:15:02.504 "tls_version": 0, 00:15:02.504 "zerocopy_threshold": 0 00:15:02.504 } 00:15:02.504 }, 00:15:02.504 { 00:15:02.504 "method": "sock_impl_set_options", 00:15:02.504 "params": { 00:15:02.504 "enable_ktls": false, 00:15:02.504 "enable_placement_id": 0, 00:15:02.504 "enable_quickack": false, 00:15:02.504 "enable_recv_pipe": true, 00:15:02.504 "enable_zerocopy_send_client": false, 00:15:02.504 "enable_zerocopy_send_server": true, 00:15:02.504 "impl_name": "posix", 00:15:02.504 "recv_buf_size": 2097152, 00:15:02.504 "send_buf_size": 2097152, 00:15:02.504 "tls_version": 0, 00:15:02.504 "zerocopy_threshold": 0 00:15:02.504 } 00:15:02.504 } 00:15:02.504 ] 00:15:02.504 }, 00:15:02.504 { 00:15:02.504 "subsystem": "vmd", 00:15:02.504 "config": [] 00:15:02.504 }, 00:15:02.504 { 00:15:02.504 "subsystem": "accel", 00:15:02.504 "config": [ 00:15:02.504 { 00:15:02.504 "method": "accel_set_options", 00:15:02.504 "params": { 00:15:02.504 "buf_count": 2048, 00:15:02.504 "large_cache_size": 16, 00:15:02.504 "sequence_count": 2048, 00:15:02.505 "small_cache_size": 128, 00:15:02.505 "task_count": 2048 00:15:02.505 } 00:15:02.505 } 00:15:02.505 ] 00:15:02.505 }, 00:15:02.505 { 00:15:02.505 "subsystem": "bdev", 00:15:02.505 "config": [ 00:15:02.505 { 00:15:02.505 "method": "bdev_set_options", 00:15:02.505 "params": { 00:15:02.505 "bdev_auto_examine": true, 00:15:02.505 "bdev_io_cache_size": 256, 00:15:02.505 "bdev_io_pool_size": 65535, 00:15:02.505 "iobuf_large_cache_size": 16, 00:15:02.505 "iobuf_small_cache_size": 128 00:15:02.505 } 00:15:02.505 }, 00:15:02.505 { 00:15:02.505 "method": "bdev_raid_set_options", 
00:15:02.505 "params": { 00:15:02.505 "process_max_bandwidth_mb_sec": 0, 00:15:02.505 "process_window_size_kb": 1024 00:15:02.505 } 00:15:02.505 }, 00:15:02.505 { 00:15:02.505 "method": "bdev_iscsi_set_options", 00:15:02.505 "params": { 00:15:02.505 "timeout_sec": 30 00:15:02.505 } 00:15:02.505 }, 00:15:02.505 { 00:15:02.505 "method": "bdev_nvme_set_options", 00:15:02.505 "params": { 00:15:02.505 "action_on_timeout": "none", 00:15:02.505 "allow_accel_sequence": false, 00:15:02.505 "arbitration_burst": 0, 00:15:02.505 "bdev_retry_count": 3, 00:15:02.505 "ctrlr_loss_timeout_sec": 0, 00:15:02.505 "delay_cmd_submit": true, 00:15:02.505 "dhchap_dhgroups": [ 00:15:02.505 "null", 00:15:02.505 "ffdhe2048", 00:15:02.505 "ffdhe3072", 00:15:02.505 "ffdhe4096", 00:15:02.505 "ffdhe6144", 00:15:02.505 "ffdhe8192" 00:15:02.505 ], 00:15:02.505 "dhchap_digests": [ 00:15:02.505 "sha256", 00:15:02.505 "sha384", 00:15:02.505 "sha512" 00:15:02.505 ], 00:15:02.505 "disable_auto_failback": false, 00:15:02.505 "fast_io_fail_timeout_sec": 0, 00:15:02.505 "generate_uuids": false, 00:15:02.505 "high_priority_weight": 0, 00:15:02.505 "io_path_stat": false, 00:15:02.505 "io_queue_requests": 512, 00:15:02.505 "keep_alive_timeout_ms": 10000, 00:15:02.505 "low_priority_weight": 0, 00:15:02.505 "medium_priority_weight": 0, 00:15:02.505 "nvme_adminq_poll_period_us": 10000, 00:15:02.505 "nvme_error_stat": false, 00:15:02.505 "nvme_ioq_poll_period_us": 0, 00:15:02.505 "rdma_cm_event_timeout_ms": 0, 00:15:02.505 "rdma_max_cq_size": 0, 00:15:02.505 "rdma_srq_size": 0, 00:15:02.505 "reconnect_delay_sec": 0, 00:15:02.505 "timeout_admin_us": 0, 00:15:02.505 "timeout_us": 0, 00:15:02.505 "transport_ack_timeout": 0, 00:15:02.505 "transport_retry_count": 4, 00:15:02.505 "transport_tos": 0 00:15:02.505 } 00:15:02.505 }, 00:15:02.505 { 00:15:02.505 "method": "bdev_nvme_attach_controller", 00:15:02.505 "params": { 00:15:02.505 "adrfam": "IPv4", 00:15:02.505 "ctrlr_loss_timeout_sec": 0, 00:15:02.505 "ddgst": false, 00:15:02.505 "fast_io_fail_timeout_sec": 0, 00:15:02.505 "hdgst": false, 00:15:02.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.505 "multipath": "multipath", 00:15:02.505 "name": "TLSTEST", 00:15:02.505 "prchk_guard": false, 00:15:02.505 "prchk_reftag": false, 00:15:02.505 "psk": "key0", 00:15:02.505 "reconnect_delay_sec": 0, 00:15:02.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.505 "traddr": "10.0.0.3", 00:15:02.505 "trsvcid": "4420", 00:15:02.505 "trtype": "TCP" 00:15:02.505 } 00:15:02.505 }, 00:15:02.505 { 00:15:02.505 "method": "bdev_nvme_set_hotplug", 00:15:02.505 "params": { 00:15:02.505 "enable": false, 00:15:02.505 "period_us": 100000 00:15:02.505 } 00:15:02.505 }, 00:15:02.505 { 00:15:02.505 "method": "bdev_wait_for_examine" 00:15:02.505 } 00:15:02.505 ] 00:15:02.505 }, 00:15:02.505 { 00:15:02.505 "subsystem": "nbd", 00:15:02.505 "config": [] 00:15:02.505 } 00:15:02.505 ] 00:15:02.505 }' 00:15:02.505 [2024-12-05 12:18:33.350545] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:15:02.505 [2024-12-05 12:18:33.350657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83476 ] 00:15:02.764 [2024-12-05 12:18:33.503395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.764 [2024-12-05 12:18:33.555583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.024 [2024-12-05 12:18:33.736911] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:03.624 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.624 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:03.624 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:03.624 Running I/O for 10 seconds... 00:15:05.935 4844.00 IOPS, 18.92 MiB/s [2024-12-05T12:18:37.739Z] 4891.50 IOPS, 19.11 MiB/s [2024-12-05T12:18:38.674Z] 4923.33 IOPS, 19.23 MiB/s [2024-12-05T12:18:39.609Z] 4950.75 IOPS, 19.34 MiB/s [2024-12-05T12:18:40.553Z] 4952.60 IOPS, 19.35 MiB/s [2024-12-05T12:18:41.488Z] 4889.50 IOPS, 19.10 MiB/s [2024-12-05T12:18:42.864Z] 4867.71 IOPS, 19.01 MiB/s [2024-12-05T12:18:43.799Z] 4853.38 IOPS, 18.96 MiB/s [2024-12-05T12:18:44.755Z] 4831.11 IOPS, 18.87 MiB/s [2024-12-05T12:18:44.755Z] 4788.20 IOPS, 18.70 MiB/s 00:15:13.886 Latency(us) 00:15:13.886 [2024-12-05T12:18:44.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.886 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:13.886 Verification LBA range: start 0x0 length 0x2000 00:15:13.886 TLSTESTn1 : 10.01 4794.06 18.73 0.00 0.00 26657.65 4170.47 33125.47 00:15:13.886 [2024-12-05T12:18:44.755Z] =================================================================================================================== 00:15:13.886 [2024-12-05T12:18:44.755Z] Total : 4794.06 18.73 0.00 0.00 26657.65 4170.47 33125.47 00:15:13.886 { 00:15:13.886 "results": [ 00:15:13.886 { 00:15:13.886 "job": "TLSTESTn1", 00:15:13.886 "core_mask": "0x4", 00:15:13.886 "workload": "verify", 00:15:13.886 "status": "finished", 00:15:13.886 "verify_range": { 00:15:13.886 "start": 0, 00:15:13.886 "length": 8192 00:15:13.886 }, 00:15:13.886 "queue_depth": 128, 00:15:13.886 "io_size": 4096, 00:15:13.886 "runtime": 10.014477, 00:15:13.886 "iops": 4794.059639859375, 00:15:13.886 "mibps": 18.726795468200685, 00:15:13.886 "io_failed": 0, 00:15:13.886 "io_timeout": 0, 00:15:13.886 "avg_latency_us": 26657.651558955524, 00:15:13.886 "min_latency_us": 4170.472727272727, 00:15:13.886 "max_latency_us": 33125.46909090909 00:15:13.886 } 00:15:13.886 ], 00:15:13.886 "core_count": 1 00:15:13.886 } 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83476 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83476 ']' 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83476 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83476 00:15:13.886 killing process with pid 83476 00:15:13.886 Received shutdown signal, test time was about 10.000000 seconds 00:15:13.886 00:15:13.886 Latency(us) 00:15:13.886 [2024-12-05T12:18:44.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.886 [2024-12-05T12:18:44.755Z] =================================================================================================================== 00:15:13.886 [2024-12-05T12:18:44.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83476' 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83476 00:15:13.886 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83476 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83432 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83432 ']' 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83432 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83432 00:15:14.145 killing process with pid 83432 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83432' 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83432 00:15:14.145 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83432 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83628 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83628 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 83628 ']' 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.404 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.404 [2024-12-05 12:18:45.084896] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:14.404 [2024-12-05 12:18:45.084992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.404 [2024-12-05 12:18:45.237458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.662 [2024-12-05 12:18:45.296955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.662 [2024-12-05 12:18:45.297409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.662 [2024-12-05 12:18:45.297435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.662 [2024-12-05 12:18:45.297449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.662 [2024-12-05 12:18:45.297458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
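#
# NOTE on reading the Latency(us) tables: the 10-second run above held 128
# 4096-byte I/Os in flight (-q 128 -o 4096), so by Little's law average latency
# and throughput are tied together, latency ~= queue_depth / IOPS:
#
#   128 / 4794.06 IOPS ~= 0.0267 s ~= 26,700 us
#
# which agrees with the reported avg_latency_us of 26657.65 within rounding: the
# workload keeps the queue full, so per-I/O latency scales with queue depth.
#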
00:15:14.662 [2024-12-05 12:18:45.297962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.OBQhAmcRBa 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OBQhAmcRBa 00:15:14.662 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:14.925 [2024-12-05 12:18:45.680438] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.925 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:15.183 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:15.441 [2024-12-05 12:18:46.240511] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:15.441 [2024-12-05 12:18:46.240792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:15.441 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:15.699 malloc0 00:15:15.699 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:15.957 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:15:16.217 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:16.475 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=83724 00:15:16.475 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:16.476 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:16.476 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 83724 /var/tmp/bdevperf.sock 00:15:16.476 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83724 ']' 00:15:16.476 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
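#
# NOTE: for this final pass the initiator is configured over RPC rather than from
# a JSON config: the key is registered on the bdevperf instance's own RPC socket
# and the controller attached by hand (tls.sh@229-230, traced below). The remote
# namespace then shows up as bdev nvme0n1 once the TLS handshake completes:
#
#   rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa
#   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
#       -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
#       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
#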
00:15:16.476 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.476 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.476 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.476 12:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.476 [2024-12-05 12:18:47.287037] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:16.476 [2024-12-05 12:18:47.287123] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83724 ] 00:15:16.734 [2024-12-05 12:18:47.439706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.734 [2024-12-05 12:18:47.499440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.670 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.670 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:17.670 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:15:17.670 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:17.929 [2024-12-05 12:18:48.646962] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.929 nvme0n1 00:15:17.929 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:18.188 Running I/O for 1 seconds... 
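On the initiator side the same PSK is loaded into the bdevperf instance's own keyring, the controller is attached over TLS, and bdevperf.py then kicks off the timed run. The three calls logged above, condensed:

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests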
00:15:19.125 4725.00 IOPS, 18.46 MiB/s 00:15:19.125 Latency(us) 00:15:19.125 [2024-12-05T12:18:49.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.125 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:19.125 Verification LBA range: start 0x0 length 0x2000 00:15:19.125 nvme0n1 : 1.01 4789.26 18.71 0.00 0.00 26518.66 4825.83 20614.05 00:15:19.125 [2024-12-05T12:18:49.994Z] =================================================================================================================== 00:15:19.125 [2024-12-05T12:18:49.994Z] Total : 4789.26 18.71 0.00 0.00 26518.66 4825.83 20614.05 00:15:19.125 { 00:15:19.125 "results": [ 00:15:19.125 { 00:15:19.125 "job": "nvme0n1", 00:15:19.125 "core_mask": "0x2", 00:15:19.125 "workload": "verify", 00:15:19.125 "status": "finished", 00:15:19.125 "verify_range": { 00:15:19.125 "start": 0, 00:15:19.125 "length": 8192 00:15:19.125 }, 00:15:19.125 "queue_depth": 128, 00:15:19.125 "io_size": 4096, 00:15:19.125 "runtime": 1.013308, 00:15:19.125 "iops": 4789.264468453816, 00:15:19.125 "mibps": 18.70806432989772, 00:15:19.125 "io_failed": 0, 00:15:19.125 "io_timeout": 0, 00:15:19.125 "avg_latency_us": 26518.656309311955, 00:15:19.125 "min_latency_us": 4825.832727272727, 00:15:19.125 "max_latency_us": 20614.05090909091 00:15:19.125 } 00:15:19.125 ], 00:15:19.125 "core_count": 1 00:15:19.125 } 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 83724 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83724 ']' 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83724 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83724 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:19.125 killing process with pid 83724 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83724' 00:15:19.125 Received shutdown signal, test time was about 1.000000 seconds 00:15:19.125 00:15:19.125 Latency(us) 00:15:19.125 [2024-12-05T12:18:49.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.125 [2024-12-05T12:18:49.994Z] =================================================================================================================== 00:15:19.125 [2024-12-05T12:18:49.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83724 00:15:19.125 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83724 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 83628 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83628 ']' 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83628 00:15:19.384 12:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83628 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.384 killing process with pid 83628 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83628' 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83628 00:15:19.384 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83628 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83799 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83799 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83799 ']' 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.643 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.643 [2024-12-05 12:18:50.410385] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:19.643 [2024-12-05 12:18:50.410449] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.902 [2024-12-05 12:18:50.546952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.902 [2024-12-05 12:18:50.589161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.902 [2024-12-05 12:18:50.589211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:19.902 [2024-12-05 12:18:50.589221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.902 [2024-12-05 12:18:50.589228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.902 [2024-12-05 12:18:50.589235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.902 [2024-12-05 12:18:50.589625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.839 [2024-12-05 12:18:51.407681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.839 malloc0 00:15:20.839 [2024-12-05 12:18:51.438017] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:20.839 [2024-12-05 12:18:51.438221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:20.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=83849 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 83849 /var/tmp/bdevperf.sock 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83849 ']' 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.839 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.839 [2024-12-05 12:18:51.528043] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
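Decoding the bdevperf command line used throughout this test (flag meanings per the standard bdevperf usage text, so a reference sketch rather than log output):

    # -m 2: core mask 0x2, i.e. run the reactor on core 1
    # -z:   start idle and wait for the perform_tests RPC instead of running at launch
    # -r:   UNIX-domain RPC socket that rpc.py and bdevperf.py talk to
    # -q 128 -o 4k: queue depth 128, 4 KiB I/O size
    # -w verify -t 1: data-verification workload, 1 second run time
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1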
00:15:20.839 [2024-12-05 12:18:51.528146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83849 ] 00:15:20.839 [2024-12-05 12:18:51.668479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.098 [2024-12-05 12:18:51.738382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.665 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.665 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:21.665 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OBQhAmcRBa 00:15:21.924 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:22.183 [2024-12-05 12:18:52.966141] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:22.183 nvme0n1 00:15:22.442 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:22.442 Running I/O for 1 seconds... 00:15:23.379 4723.00 IOPS, 18.45 MiB/s 00:15:23.379 Latency(us) 00:15:23.379 [2024-12-05T12:18:54.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.379 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:23.379 Verification LBA range: start 0x0 length 0x2000 00:15:23.379 nvme0n1 : 1.01 4782.23 18.68 0.00 0.00 26548.76 5302.46 22520.55 00:15:23.379 [2024-12-05T12:18:54.248Z] =================================================================================================================== 00:15:23.379 [2024-12-05T12:18:54.248Z] Total : 4782.23 18.68 0.00 0.00 26548.76 5302.46 22520.55 00:15:23.379 { 00:15:23.379 "results": [ 00:15:23.379 { 00:15:23.379 "job": "nvme0n1", 00:15:23.379 "core_mask": "0x2", 00:15:23.379 "workload": "verify", 00:15:23.379 "status": "finished", 00:15:23.379 "verify_range": { 00:15:23.379 "start": 0, 00:15:23.379 "length": 8192 00:15:23.379 }, 00:15:23.379 "queue_depth": 128, 00:15:23.379 "io_size": 4096, 00:15:23.379 "runtime": 1.014381, 00:15:23.379 "iops": 4782.226796440391, 00:15:23.379 "mibps": 18.680573423595277, 00:15:23.379 "io_failed": 0, 00:15:23.379 "io_timeout": 0, 00:15:23.379 "avg_latency_us": 26548.75506530987, 00:15:23.379 "min_latency_us": 5302.458181818181, 00:15:23.379 "max_latency_us": 22520.552727272727 00:15:23.379 } 00:15:23.379 ], 00:15:23.379 "core_count": 1 00:15:23.379 } 00:15:23.379 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:23.379 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.379 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.638 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.638 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:15:23.638 "subsystems": [ 00:15:23.638 { 00:15:23.638 "subsystem": "keyring", 00:15:23.638 "config": [ 00:15:23.638 { 00:15:23.638 "method": "keyring_file_add_key", 00:15:23.638 "params": { 00:15:23.638 "name": "key0", 00:15:23.638 "path": "/tmp/tmp.OBQhAmcRBa" 00:15:23.638 } 00:15:23.638 } 00:15:23.638 ] 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "subsystem": "iobuf", 00:15:23.638 "config": [ 00:15:23.638 { 00:15:23.638 "method": "iobuf_set_options", 00:15:23.638 "params": { 00:15:23.638 "enable_numa": false, 00:15:23.638 "large_bufsize": 135168, 00:15:23.638 "large_pool_count": 1024, 00:15:23.638 "small_bufsize": 8192, 00:15:23.638 "small_pool_count": 8192 00:15:23.638 } 00:15:23.638 } 00:15:23.638 ] 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "subsystem": "sock", 00:15:23.638 "config": [ 00:15:23.638 { 00:15:23.638 "method": "sock_set_default_impl", 00:15:23.638 "params": { 00:15:23.638 "impl_name": "posix" 00:15:23.638 } 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "method": "sock_impl_set_options", 00:15:23.638 "params": { 00:15:23.638 "enable_ktls": false, 00:15:23.638 "enable_placement_id": 0, 00:15:23.638 "enable_quickack": false, 00:15:23.638 "enable_recv_pipe": true, 00:15:23.638 "enable_zerocopy_send_client": false, 00:15:23.638 "enable_zerocopy_send_server": true, 00:15:23.638 "impl_name": "ssl", 00:15:23.638 "recv_buf_size": 4096, 00:15:23.638 "send_buf_size": 4096, 00:15:23.638 "tls_version": 0, 00:15:23.638 "zerocopy_threshold": 0 00:15:23.638 } 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "method": "sock_impl_set_options", 00:15:23.638 "params": { 00:15:23.638 "enable_ktls": false, 00:15:23.638 "enable_placement_id": 0, 00:15:23.638 "enable_quickack": false, 00:15:23.638 "enable_recv_pipe": true, 00:15:23.638 "enable_zerocopy_send_client": false, 00:15:23.638 "enable_zerocopy_send_server": true, 00:15:23.638 "impl_name": "posix", 00:15:23.638 "recv_buf_size": 2097152, 00:15:23.638 "send_buf_size": 2097152, 00:15:23.638 "tls_version": 0, 00:15:23.638 "zerocopy_threshold": 0 00:15:23.638 } 00:15:23.638 } 00:15:23.638 ] 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "subsystem": "vmd", 00:15:23.638 "config": [] 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "subsystem": "accel", 00:15:23.638 "config": [ 00:15:23.638 { 00:15:23.638 "method": "accel_set_options", 00:15:23.638 "params": { 00:15:23.638 "buf_count": 2048, 00:15:23.638 "large_cache_size": 16, 00:15:23.638 "sequence_count": 2048, 00:15:23.638 "small_cache_size": 128, 00:15:23.638 "task_count": 2048 00:15:23.638 } 00:15:23.638 } 00:15:23.638 ] 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "subsystem": "bdev", 00:15:23.638 "config": [ 00:15:23.638 { 00:15:23.638 "method": "bdev_set_options", 00:15:23.638 "params": { 00:15:23.638 "bdev_auto_examine": true, 00:15:23.638 "bdev_io_cache_size": 256, 00:15:23.638 "bdev_io_pool_size": 65535, 00:15:23.638 "iobuf_large_cache_size": 16, 00:15:23.638 "iobuf_small_cache_size": 128 00:15:23.638 } 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "method": "bdev_raid_set_options", 00:15:23.638 "params": { 00:15:23.638 "process_max_bandwidth_mb_sec": 0, 00:15:23.638 "process_window_size_kb": 1024 00:15:23.638 } 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "method": "bdev_iscsi_set_options", 00:15:23.638 "params": { 00:15:23.638 "timeout_sec": 30 00:15:23.638 } 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "method": "bdev_nvme_set_options", 00:15:23.638 "params": { 00:15:23.638 "action_on_timeout": "none", 00:15:23.638 "allow_accel_sequence": false, 00:15:23.638 "arbitration_burst": 0, 00:15:23.638 
"bdev_retry_count": 3, 00:15:23.638 "ctrlr_loss_timeout_sec": 0, 00:15:23.638 "delay_cmd_submit": true, 00:15:23.638 "dhchap_dhgroups": [ 00:15:23.638 "null", 00:15:23.638 "ffdhe2048", 00:15:23.638 "ffdhe3072", 00:15:23.638 "ffdhe4096", 00:15:23.638 "ffdhe6144", 00:15:23.638 "ffdhe8192" 00:15:23.638 ], 00:15:23.638 "dhchap_digests": [ 00:15:23.638 "sha256", 00:15:23.638 "sha384", 00:15:23.638 "sha512" 00:15:23.638 ], 00:15:23.638 "disable_auto_failback": false, 00:15:23.638 "fast_io_fail_timeout_sec": 0, 00:15:23.638 "generate_uuids": false, 00:15:23.638 "high_priority_weight": 0, 00:15:23.638 "io_path_stat": false, 00:15:23.638 "io_queue_requests": 0, 00:15:23.638 "keep_alive_timeout_ms": 10000, 00:15:23.638 "low_priority_weight": 0, 00:15:23.638 "medium_priority_weight": 0, 00:15:23.638 "nvme_adminq_poll_period_us": 10000, 00:15:23.638 "nvme_error_stat": false, 00:15:23.638 "nvme_ioq_poll_period_us": 0, 00:15:23.638 "rdma_cm_event_timeout_ms": 0, 00:15:23.638 "rdma_max_cq_size": 0, 00:15:23.638 "rdma_srq_size": 0, 00:15:23.638 "reconnect_delay_sec": 0, 00:15:23.638 "timeout_admin_us": 0, 00:15:23.638 "timeout_us": 0, 00:15:23.638 "transport_ack_timeout": 0, 00:15:23.638 "transport_retry_count": 4, 00:15:23.638 "transport_tos": 0 00:15:23.638 } 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "method": "bdev_nvme_set_hotplug", 00:15:23.638 "params": { 00:15:23.638 "enable": false, 00:15:23.638 "period_us": 100000 00:15:23.638 } 00:15:23.638 }, 00:15:23.638 { 00:15:23.638 "method": "bdev_malloc_create", 00:15:23.638 "params": { 00:15:23.638 "block_size": 4096, 00:15:23.638 "dif_is_head_of_md": false, 00:15:23.638 "dif_pi_format": 0, 00:15:23.638 "dif_type": 0, 00:15:23.639 "md_size": 0, 00:15:23.639 "name": "malloc0", 00:15:23.639 "num_blocks": 8192, 00:15:23.639 "optimal_io_boundary": 0, 00:15:23.639 "physical_block_size": 4096, 00:15:23.639 "uuid": "620461f0-1423-4875-afd2-51a34bc957fe" 00:15:23.639 } 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "method": "bdev_wait_for_examine" 00:15:23.639 } 00:15:23.639 ] 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "subsystem": "nbd", 00:15:23.639 "config": [] 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "subsystem": "scheduler", 00:15:23.639 "config": [ 00:15:23.639 { 00:15:23.639 "method": "framework_set_scheduler", 00:15:23.639 "params": { 00:15:23.639 "name": "static" 00:15:23.639 } 00:15:23.639 } 00:15:23.639 ] 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "subsystem": "nvmf", 00:15:23.639 "config": [ 00:15:23.639 { 00:15:23.639 "method": "nvmf_set_config", 00:15:23.639 "params": { 00:15:23.639 "admin_cmd_passthru": { 00:15:23.639 "identify_ctrlr": false 00:15:23.639 }, 00:15:23.639 "dhchap_dhgroups": [ 00:15:23.639 "null", 00:15:23.639 "ffdhe2048", 00:15:23.639 "ffdhe3072", 00:15:23.639 "ffdhe4096", 00:15:23.639 "ffdhe6144", 00:15:23.639 "ffdhe8192" 00:15:23.639 ], 00:15:23.639 "dhchap_digests": [ 00:15:23.639 "sha256", 00:15:23.639 "sha384", 00:15:23.639 "sha512" 00:15:23.639 ], 00:15:23.639 "discovery_filter": "match_any" 00:15:23.639 } 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "method": "nvmf_set_max_subsystems", 00:15:23.639 "params": { 00:15:23.639 "max_subsystems": 1024 00:15:23.639 } 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "method": "nvmf_set_crdt", 00:15:23.639 "params": { 00:15:23.639 "crdt1": 0, 00:15:23.639 "crdt2": 0, 00:15:23.639 "crdt3": 0 00:15:23.639 } 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "method": "nvmf_create_transport", 00:15:23.639 "params": { 00:15:23.639 "abort_timeout_sec": 1, 00:15:23.639 "ack_timeout": 0, 
00:15:23.639 "buf_cache_size": 4294967295, 00:15:23.639 "c2h_success": false, 00:15:23.639 "data_wr_pool_size": 0, 00:15:23.639 "dif_insert_or_strip": false, 00:15:23.639 "in_capsule_data_size": 4096, 00:15:23.639 "io_unit_size": 131072, 00:15:23.639 "max_aq_depth": 128, 00:15:23.639 "max_io_qpairs_per_ctrlr": 127, 00:15:23.639 "max_io_size": 131072, 00:15:23.639 "max_queue_depth": 128, 00:15:23.639 "num_shared_buffers": 511, 00:15:23.639 "sock_priority": 0, 00:15:23.639 "trtype": "TCP", 00:15:23.639 "zcopy": false 00:15:23.639 } 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "method": "nvmf_create_subsystem", 00:15:23.639 "params": { 00:15:23.639 "allow_any_host": false, 00:15:23.639 "ana_reporting": false, 00:15:23.639 "max_cntlid": 65519, 00:15:23.639 "max_namespaces": 32, 00:15:23.639 "min_cntlid": 1, 00:15:23.639 "model_number": "SPDK bdev Controller", 00:15:23.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.639 "serial_number": "00000000000000000000" 00:15:23.639 } 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "method": "nvmf_subsystem_add_host", 00:15:23.639 "params": { 00:15:23.639 "host": "nqn.2016-06.io.spdk:host1", 00:15:23.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.639 "psk": "key0" 00:15:23.639 } 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "method": "nvmf_subsystem_add_ns", 00:15:23.639 "params": { 00:15:23.639 "namespace": { 00:15:23.639 "bdev_name": "malloc0", 00:15:23.639 "nguid": "620461F014234875AFD251A34BC957FE", 00:15:23.639 "no_auto_visible": false, 00:15:23.639 "nsid": 1, 00:15:23.639 "uuid": "620461f0-1423-4875-afd2-51a34bc957fe" 00:15:23.639 }, 00:15:23.639 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:23.639 } 00:15:23.639 }, 00:15:23.639 { 00:15:23.639 "method": "nvmf_subsystem_add_listener", 00:15:23.639 "params": { 00:15:23.639 "listen_address": { 00:15:23.639 "adrfam": "IPv4", 00:15:23.639 "traddr": "10.0.0.3", 00:15:23.639 "trsvcid": "4420", 00:15:23.639 "trtype": "TCP" 00:15:23.639 }, 00:15:23.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.639 "secure_channel": false, 00:15:23.639 "sock_impl": "ssl" 00:15:23.639 } 00:15:23.639 } 00:15:23.639 ] 00:15:23.639 } 00:15:23.639 ] 00:15:23.639 }' 00:15:23.639 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:23.897 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:23.897 "subsystems": [ 00:15:23.897 { 00:15:23.897 "subsystem": "keyring", 00:15:23.897 "config": [ 00:15:23.897 { 00:15:23.897 "method": "keyring_file_add_key", 00:15:23.897 "params": { 00:15:23.897 "name": "key0", 00:15:23.897 "path": "/tmp/tmp.OBQhAmcRBa" 00:15:23.897 } 00:15:23.897 } 00:15:23.897 ] 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "subsystem": "iobuf", 00:15:23.897 "config": [ 00:15:23.897 { 00:15:23.897 "method": "iobuf_set_options", 00:15:23.897 "params": { 00:15:23.897 "enable_numa": false, 00:15:23.897 "large_bufsize": 135168, 00:15:23.897 "large_pool_count": 1024, 00:15:23.897 "small_bufsize": 8192, 00:15:23.897 "small_pool_count": 8192 00:15:23.897 } 00:15:23.897 } 00:15:23.897 ] 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "subsystem": "sock", 00:15:23.897 "config": [ 00:15:23.897 { 00:15:23.897 "method": "sock_set_default_impl", 00:15:23.897 "params": { 00:15:23.897 "impl_name": "posix" 00:15:23.897 } 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "method": "sock_impl_set_options", 00:15:23.897 "params": { 00:15:23.897 "enable_ktls": false, 00:15:23.897 "enable_placement_id": 0, 
00:15:23.897 "enable_quickack": false, 00:15:23.897 "enable_recv_pipe": true, 00:15:23.897 "enable_zerocopy_send_client": false, 00:15:23.897 "enable_zerocopy_send_server": true, 00:15:23.897 "impl_name": "ssl", 00:15:23.897 "recv_buf_size": 4096, 00:15:23.897 "send_buf_size": 4096, 00:15:23.897 "tls_version": 0, 00:15:23.897 "zerocopy_threshold": 0 00:15:23.897 } 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "method": "sock_impl_set_options", 00:15:23.897 "params": { 00:15:23.897 "enable_ktls": false, 00:15:23.897 "enable_placement_id": 0, 00:15:23.897 "enable_quickack": false, 00:15:23.897 "enable_recv_pipe": true, 00:15:23.897 "enable_zerocopy_send_client": false, 00:15:23.897 "enable_zerocopy_send_server": true, 00:15:23.897 "impl_name": "posix", 00:15:23.897 "recv_buf_size": 2097152, 00:15:23.897 "send_buf_size": 2097152, 00:15:23.897 "tls_version": 0, 00:15:23.897 "zerocopy_threshold": 0 00:15:23.897 } 00:15:23.897 } 00:15:23.897 ] 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "subsystem": "vmd", 00:15:23.897 "config": [] 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "subsystem": "accel", 00:15:23.897 "config": [ 00:15:23.897 { 00:15:23.897 "method": "accel_set_options", 00:15:23.897 "params": { 00:15:23.897 "buf_count": 2048, 00:15:23.897 "large_cache_size": 16, 00:15:23.897 "sequence_count": 2048, 00:15:23.897 "small_cache_size": 128, 00:15:23.897 "task_count": 2048 00:15:23.897 } 00:15:23.897 } 00:15:23.897 ] 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "subsystem": "bdev", 00:15:23.897 "config": [ 00:15:23.897 { 00:15:23.897 "method": "bdev_set_options", 00:15:23.897 "params": { 00:15:23.897 "bdev_auto_examine": true, 00:15:23.897 "bdev_io_cache_size": 256, 00:15:23.897 "bdev_io_pool_size": 65535, 00:15:23.897 "iobuf_large_cache_size": 16, 00:15:23.897 "iobuf_small_cache_size": 128 00:15:23.897 } 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "method": "bdev_raid_set_options", 00:15:23.897 "params": { 00:15:23.897 "process_max_bandwidth_mb_sec": 0, 00:15:23.897 "process_window_size_kb": 1024 00:15:23.897 } 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "method": "bdev_iscsi_set_options", 00:15:23.897 "params": { 00:15:23.897 "timeout_sec": 30 00:15:23.897 } 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "method": "bdev_nvme_set_options", 00:15:23.897 "params": { 00:15:23.897 "action_on_timeout": "none", 00:15:23.897 "allow_accel_sequence": false, 00:15:23.897 "arbitration_burst": 0, 00:15:23.897 "bdev_retry_count": 3, 00:15:23.897 "ctrlr_loss_timeout_sec": 0, 00:15:23.897 "delay_cmd_submit": true, 00:15:23.897 "dhchap_dhgroups": [ 00:15:23.897 "null", 00:15:23.897 "ffdhe2048", 00:15:23.897 "ffdhe3072", 00:15:23.897 "ffdhe4096", 00:15:23.897 "ffdhe6144", 00:15:23.897 "ffdhe8192" 00:15:23.897 ], 00:15:23.897 "dhchap_digests": [ 00:15:23.897 "sha256", 00:15:23.897 "sha384", 00:15:23.897 "sha512" 00:15:23.897 ], 00:15:23.897 "disable_auto_failback": false, 00:15:23.897 "fast_io_fail_timeout_sec": 0, 00:15:23.897 "generate_uuids": false, 00:15:23.897 "high_priority_weight": 0, 00:15:23.897 "io_path_stat": false, 00:15:23.897 "io_queue_requests": 512, 00:15:23.897 "keep_alive_timeout_ms": 10000, 00:15:23.897 "low_priority_weight": 0, 00:15:23.897 "medium_priority_weight": 0, 00:15:23.897 "nvme_adminq_poll_period_us": 10000, 00:15:23.897 "nvme_error_stat": false, 00:15:23.897 "nvme_ioq_poll_period_us": 0, 00:15:23.897 "rdma_cm_event_timeout_ms": 0, 00:15:23.897 "rdma_max_cq_size": 0, 00:15:23.897 "rdma_srq_size": 0, 00:15:23.897 "reconnect_delay_sec": 0, 00:15:23.897 "timeout_admin_us": 0, 00:15:23.897 
"timeout_us": 0, 00:15:23.897 "transport_ack_timeout": 0, 00:15:23.897 "transport_retry_count": 4, 00:15:23.897 "transport_tos": 0 00:15:23.897 } 00:15:23.897 }, 00:15:23.897 { 00:15:23.897 "method": "bdev_nvme_attach_controller", 00:15:23.897 "params": { 00:15:23.897 "adrfam": "IPv4", 00:15:23.897 "ctrlr_loss_timeout_sec": 0, 00:15:23.897 "ddgst": false, 00:15:23.897 "fast_io_fail_timeout_sec": 0, 00:15:23.897 "hdgst": false, 00:15:23.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.897 "multipath": "multipath", 00:15:23.897 "name": "nvme0", 00:15:23.897 "prchk_guard": false, 00:15:23.897 "prchk_reftag": false, 00:15:23.897 "psk": "key0", 00:15:23.897 "reconnect_delay_sec": 0, 00:15:23.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.898 "traddr": "10.0.0.3", 00:15:23.898 "trsvcid": "4420", 00:15:23.898 "trtype": "TCP" 00:15:23.898 } 00:15:23.898 }, 00:15:23.898 { 00:15:23.898 "method": "bdev_nvme_set_hotplug", 00:15:23.898 "params": { 00:15:23.898 "enable": false, 00:15:23.898 "period_us": 100000 00:15:23.898 } 00:15:23.898 }, 00:15:23.898 { 00:15:23.898 "method": "bdev_enable_histogram", 00:15:23.898 "params": { 00:15:23.898 "enable": true, 00:15:23.898 "name": "nvme0n1" 00:15:23.898 } 00:15:23.898 }, 00:15:23.898 { 00:15:23.898 "method": "bdev_wait_for_examine" 00:15:23.898 } 00:15:23.898 ] 00:15:23.898 }, 00:15:23.898 { 00:15:23.898 "subsystem": "nbd", 00:15:23.898 "config": [] 00:15:23.898 } 00:15:23.898 ] 00:15:23.898 }' 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 83849 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83849 ']' 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83849 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83849 00:15:23.898 killing process with pid 83849 00:15:23.898 Received shutdown signal, test time was about 1.000000 seconds 00:15:23.898 00:15:23.898 Latency(us) 00:15:23.898 [2024-12-05T12:18:54.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.898 [2024-12-05T12:18:54.767Z] =================================================================================================================== 00:15:23.898 [2024-12-05T12:18:54.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83849' 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83849 00:15:23.898 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83849 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 83799 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83799 ']' 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83799 
00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83799 00:15:24.156 killing process with pid 83799 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83799' 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83799 00:15:24.156 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83799 00:15:24.413 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:24.413 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:24.413 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.413 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.413 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:24.413 "subsystems": [ 00:15:24.413 { 00:15:24.413 "subsystem": "keyring", 00:15:24.413 "config": [ 00:15:24.413 { 00:15:24.413 "method": "keyring_file_add_key", 00:15:24.413 "params": { 00:15:24.413 "name": "key0", 00:15:24.413 "path": "/tmp/tmp.OBQhAmcRBa" 00:15:24.413 } 00:15:24.413 } 00:15:24.413 ] 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "subsystem": "iobuf", 00:15:24.413 "config": [ 00:15:24.413 { 00:15:24.413 "method": "iobuf_set_options", 00:15:24.413 "params": { 00:15:24.413 "enable_numa": false, 00:15:24.413 "large_bufsize": 135168, 00:15:24.413 "large_pool_count": 1024, 00:15:24.413 "small_bufsize": 8192, 00:15:24.413 "small_pool_count": 8192 00:15:24.413 } 00:15:24.413 } 00:15:24.413 ] 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "subsystem": "sock", 00:15:24.413 "config": [ 00:15:24.413 { 00:15:24.413 "method": "sock_set_default_impl", 00:15:24.413 "params": { 00:15:24.413 "impl_name": "posix" 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "sock_impl_set_options", 00:15:24.413 "params": { 00:15:24.413 "enable_ktls": false, 00:15:24.413 "enable_placement_id": 0, 00:15:24.413 "enable_quickack": false, 00:15:24.413 "enable_recv_pipe": true, 00:15:24.413 "enable_zerocopy_send_client": false, 00:15:24.413 "enable_zerocopy_send_server": true, 00:15:24.413 "impl_name": "ssl", 00:15:24.413 "recv_buf_size": 4096, 00:15:24.413 "send_buf_size": 4096, 00:15:24.413 "tls_version": 0, 00:15:24.413 "zerocopy_threshold": 0 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "sock_impl_set_options", 00:15:24.413 "params": { 00:15:24.413 "enable_ktls": false, 00:15:24.413 "enable_placement_id": 0, 00:15:24.413 "enable_quickack": false, 00:15:24.413 "enable_recv_pipe": true, 00:15:24.413 "enable_zerocopy_send_client": false, 00:15:24.413 "enable_zerocopy_send_server": true, 00:15:24.413 "impl_name": "posix", 00:15:24.413 "recv_buf_size": 2097152, 00:15:24.413 "send_buf_size": 2097152, 00:15:24.413 "tls_version": 0, 00:15:24.413 "zerocopy_threshold": 0 00:15:24.413 } 
00:15:24.413 } 00:15:24.413 ] 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "subsystem": "vmd", 00:15:24.413 "config": [] 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "subsystem": "accel", 00:15:24.413 "config": [ 00:15:24.413 { 00:15:24.413 "method": "accel_set_options", 00:15:24.413 "params": { 00:15:24.413 "buf_count": 2048, 00:15:24.413 "large_cache_size": 16, 00:15:24.413 "sequence_count": 2048, 00:15:24.413 "small_cache_size": 128, 00:15:24.413 "task_count": 2048 00:15:24.413 } 00:15:24.413 } 00:15:24.413 ] 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "subsystem": "bdev", 00:15:24.413 "config": [ 00:15:24.413 { 00:15:24.413 "method": "bdev_set_options", 00:15:24.413 "params": { 00:15:24.413 "bdev_auto_examine": true, 00:15:24.413 "bdev_io_cache_size": 256, 00:15:24.413 "bdev_io_pool_size": 65535, 00:15:24.413 "iobuf_large_cache_size": 16, 00:15:24.413 "iobuf_small_cache_size": 128 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "bdev_raid_set_options", 00:15:24.413 "params": { 00:15:24.413 "process_max_bandwidth_mb_sec": 0, 00:15:24.413 "process_window_size_kb": 1024 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "bdev_iscsi_set_options", 00:15:24.413 "params": { 00:15:24.413 "timeout_sec": 30 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "bdev_nvme_set_options", 00:15:24.413 "params": { 00:15:24.413 "action_on_timeout": "none", 00:15:24.413 "allow_accel_sequence": false, 00:15:24.413 "arbitration_burst": 0, 00:15:24.413 "bdev_retry_count": 3, 00:15:24.413 "ctrlr_loss_timeout_sec": 0, 00:15:24.413 "delay_cmd_submit": true, 00:15:24.413 "dhchap_dhgroups": [ 00:15:24.413 "null", 00:15:24.413 "ffdhe2048", 00:15:24.413 "ffdhe3072", 00:15:24.413 "ffdhe4096", 00:15:24.413 "ffdhe6144", 00:15:24.413 "ffdhe8192" 00:15:24.413 ], 00:15:24.413 "dhchap_digests": [ 00:15:24.413 "sha256", 00:15:24.413 "sha384", 00:15:24.413 "sha512" 00:15:24.413 ], 00:15:24.413 "disable_auto_failback": false, 00:15:24.413 "fast_io_fail_timeout_sec": 0, 00:15:24.413 "generate_uuids": false, 00:15:24.413 "high_priority_weight": 0, 00:15:24.413 "io_path_stat": false, 00:15:24.413 "io_queue_requests": 0, 00:15:24.413 "keep_alive_timeout_ms": 10000, 00:15:24.413 "low_priority_weight": 0, 00:15:24.413 "medium_priority_weight": 0, 00:15:24.413 "nvme_adminq_poll_period_us": 10000, 00:15:24.413 "nvme_error_stat": false, 00:15:24.413 "nvme_ioq_poll_period_us": 0, 00:15:24.413 "rdma_cm_event_timeout_ms": 0, 00:15:24.413 "rdma_max_cq_size": 0, 00:15:24.413 "rdma_srq_size": 0, 00:15:24.413 "reconnect_delay_sec": 0, 00:15:24.413 "timeout_admin_us": 0, 00:15:24.413 "timeout_us": 0, 00:15:24.413 "transport_ack_timeout": 0, 00:15:24.413 "transport_retry_count": 4, 00:15:24.413 "transport_tos": 0 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "bdev_nvme_set_hotplug", 00:15:24.413 "params": { 00:15:24.413 "enable": false, 00:15:24.413 "period_us": 100000 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "bdev_malloc_create", 00:15:24.413 "params": { 00:15:24.413 "block_size": 4096, 00:15:24.413 "dif_is_head_of_md": false, 00:15:24.413 "dif_pi_format": 0, 00:15:24.413 "dif_type": 0, 00:15:24.413 "md_size": 0, 00:15:24.413 "name": "malloc0", 00:15:24.413 "num_blocks": 8192, 00:15:24.413 "optimal_io_boundary": 0, 00:15:24.413 "physical_block_size": 4096, 00:15:24.413 "uuid": "620461f0-1423-4875-afd2-51a34bc957fe" 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "bdev_wait_for_examine" 00:15:24.413 } 00:15:24.413 ] 
00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "subsystem": "nbd", 00:15:24.413 "config": [] 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "subsystem": "scheduler", 00:15:24.413 "config": [ 00:15:24.413 { 00:15:24.413 "method": "framework_set_scheduler", 00:15:24.413 "params": { 00:15:24.413 "name": "static" 00:15:24.413 } 00:15:24.413 } 00:15:24.413 ] 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "subsystem": "nvmf", 00:15:24.413 "config": [ 00:15:24.413 { 00:15:24.413 "method": "nvmf_set_config", 00:15:24.413 "params": { 00:15:24.413 "admin_cmd_passthru": { 00:15:24.413 "identify_ctrlr": false 00:15:24.413 }, 00:15:24.413 "dhchap_dhgroups": [ 00:15:24.413 "null", 00:15:24.413 "ffdhe2048", 00:15:24.413 "ffdhe3072", 00:15:24.413 "ffdhe4096", 00:15:24.413 "ffdhe6144", 00:15:24.413 "ffdhe8192" 00:15:24.413 ], 00:15:24.413 "dhchap_digests": [ 00:15:24.413 "sha256", 00:15:24.413 "sha384", 00:15:24.413 "sha512" 00:15:24.413 ], 00:15:24.413 "discovery_filter": "match_any" 00:15:24.413 } 00:15:24.413 }, 00:15:24.413 { 00:15:24.413 "method": "nvmf_set_max_subsystems", 00:15:24.413 "params": { 00:15:24.413 "max_subsystems": 1024 00:15:24.414 } 00:15:24.414 }, 00:15:24.414 { 00:15:24.414 "method": "nvmf_set_crdt", 00:15:24.414 "params": { 00:15:24.414 "crdt1": 0, 00:15:24.414 "crdt2": 0, 00:15:24.414 "crdt3": 0 00:15:24.414 } 00:15:24.414 }, 00:15:24.414 { 00:15:24.414 "method": "nvmf_create_transport", 00:15:24.414 "params": { 00:15:24.414 "abort_timeout_sec": 1, 00:15:24.414 "ack_timeout": 0, 00:15:24.414 "buf_cache_size": 4294967295, 00:15:24.414 "c2h_success": false, 00:15:24.414 "data_wr_pool_size": 0, 00:15:24.414 "dif_insert_or_strip": false, 00:15:24.414 "in_capsule_data_size": 4096, 00:15:24.414 "io_unit_size": 131072, 00:15:24.414 "max_aq_depth": 128, 00:15:24.414 "max_io_qpairs_per_ctrlr": 127, 00:15:24.414 "max_io_size": 131072, 00:15:24.414 "max_queue_depth": 128, 00:15:24.414 "num_shared_buffers": 511, 00:15:24.414 "sock_priority": 0, 00:15:24.414 "trtype": "TCP", 00:15:24.414 "zcopy": false 00:15:24.414 } 00:15:24.414 }, 00:15:24.414 { 00:15:24.414 "method": "nvmf_create_subsystem", 00:15:24.414 "params": { 00:15:24.414 "allow_any_host": false, 00:15:24.414 "ana_reporting": false, 00:15:24.414 "max_cntlid": 65519, 00:15:24.414 "max_namespaces": 32, 00:15:24.414 "min_cntlid": 1, 00:15:24.414 "model_number": "SPDK bdev Controller", 00:15:24.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.414 "serial_number": "00000000000000000000" 00:15:24.414 } 00:15:24.414 }, 00:15:24.414 { 00:15:24.414 "method": "nvmf_subsystem_add_host", 00:15:24.414 "params": { 00:15:24.414 "host": "nqn.2016-06.io.spdk:host1", 00:15:24.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.414 "psk": "key0" 00:15:24.414 } 00:15:24.414 }, 00:15:24.414 { 00:15:24.414 "method": "nvmf_subsystem_add_ns", 00:15:24.414 "params": { 00:15:24.414 "namespace": { 00:15:24.414 "bdev_name": "malloc0", 00:15:24.414 "nguid": "620461F014234875AFD251A34BC957FE", 00:15:24.414 "no_auto_visible": false, 00:15:24.414 "nsid": 1, 00:15:24.414 "uuid": "620461f0-1423-4875-afd2-51a34bc957fe" 00:15:24.414 }, 00:15:24.414 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:24.414 } 00:15:24.414 }, 00:15:24.414 { 00:15:24.414 "method": "nvmf_subsystem_add_listener", 00:15:24.414 "params": { 00:15:24.414 "listen_address": { 00:15:24.414 "adrfam": "IPv4", 00:15:24.414 "traddr": "10.0.0.3", 00:15:24.414 "trsvcid": "4420", 00:15:24.414 "trtype": "TCP" 00:15:24.414 }, 00:15:24.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.414 "secure_channel": false, 00:15:24.414 
"sock_impl": "ssl" 00:15:24.414 } 00:15:24.414 } 00:15:24.414 ] 00:15:24.414 } 00:15:24.414 ] 00:15:24.414 }' 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83940 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83940 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83940 ']' 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.414 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.414 [2024-12-05 12:18:55.162542] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:24.414 [2024-12-05 12:18:55.162624] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.670 [2024-12-05 12:18:55.299521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.670 [2024-12-05 12:18:55.338200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.671 [2024-12-05 12:18:55.338270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.671 [2024-12-05 12:18:55.338292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.671 [2024-12-05 12:18:55.338300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.671 [2024-12-05 12:18:55.338307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:24.671 [2024-12-05 12:18:55.338726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.959 [2024-12-05 12:18:55.570998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.959 [2024-12-05 12:18:55.602933] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:24.959 [2024-12-05 12:18:55.603131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=83985 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 83985 /var/tmp/bdevperf.sock 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83985 ']' 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
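The config echoed below is the bperfcfg captured earlier; a quick way to see which RPCs it will replay at startup is to list the method names per subsystem (a hedged sketch, assuming bperfcfg holds the JSON shown):

    jq -r '.subsystems[] | .subsystem as $s | .config[]? | "\($s): \(.method)"' <<<"$bperfcfg"
    # -> keyring: keyring_file_add_key, bdev: bdev_nvme_attach_controller, ...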
00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.524 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:25.524 "subsystems": [ 00:15:25.524 { 00:15:25.524 "subsystem": "keyring", 00:15:25.524 "config": [ 00:15:25.524 { 00:15:25.524 "method": "keyring_file_add_key", 00:15:25.524 "params": { 00:15:25.524 "name": "key0", 00:15:25.524 "path": "/tmp/tmp.OBQhAmcRBa" 00:15:25.525 } 00:15:25.525 } 00:15:25.525 ] 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "subsystem": "iobuf", 00:15:25.525 "config": [ 00:15:25.525 { 00:15:25.525 "method": "iobuf_set_options", 00:15:25.525 "params": { 00:15:25.525 "enable_numa": false, 00:15:25.525 "large_bufsize": 135168, 00:15:25.525 "large_pool_count": 1024, 00:15:25.525 "small_bufsize": 8192, 00:15:25.525 "small_pool_count": 8192 00:15:25.525 } 00:15:25.525 } 00:15:25.525 ] 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "subsystem": "sock", 00:15:25.525 "config": [ 00:15:25.525 { 00:15:25.525 "method": "sock_set_default_impl", 00:15:25.525 "params": { 00:15:25.525 "impl_name": "posix" 00:15:25.525 } 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "method": "sock_impl_set_options", 00:15:25.525 "params": { 00:15:25.525 "enable_ktls": false, 00:15:25.525 "enable_placement_id": 0, 00:15:25.525 "enable_quickack": false, 00:15:25.525 "enable_recv_pipe": true, 00:15:25.525 "enable_zerocopy_send_client": false, 00:15:25.525 "enable_zerocopy_send_server": true, 00:15:25.525 "impl_name": "ssl", 00:15:25.525 "recv_buf_size": 4096, 00:15:25.525 "send_buf_size": 4096, 00:15:25.525 "tls_version": 0, 00:15:25.525 "zerocopy_threshold": 0 00:15:25.525 } 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "method": "sock_impl_set_options", 00:15:25.525 "params": { 00:15:25.525 "enable_ktls": false, 00:15:25.525 "enable_placement_id": 0, 00:15:25.525 "enable_quickack": false, 00:15:25.525 "enable_recv_pipe": true, 00:15:25.525 "enable_zerocopy_send_client": false, 00:15:25.525 "enable_zerocopy_send_server": true, 00:15:25.525 "impl_name": "posix", 00:15:25.525 "recv_buf_size": 2097152, 00:15:25.525 "send_buf_size": 2097152, 00:15:25.525 "tls_version": 0, 00:15:25.525 "zerocopy_threshold": 0 00:15:25.525 } 00:15:25.525 } 00:15:25.525 ] 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "subsystem": "vmd", 00:15:25.525 "config": [] 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "subsystem": "accel", 00:15:25.525 "config": [ 00:15:25.525 { 00:15:25.525 "method": "accel_set_options", 00:15:25.525 "params": { 00:15:25.525 "buf_count": 2048, 00:15:25.525 "large_cache_size": 16, 00:15:25.525 "sequence_count": 2048, 00:15:25.525 "small_cache_size": 128, 00:15:25.525 "task_count": 2048 00:15:25.525 } 00:15:25.525 } 00:15:25.525 ] 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "subsystem": "bdev", 00:15:25.525 "config": [ 00:15:25.525 { 00:15:25.525 "method": "bdev_set_options", 00:15:25.525 "params": { 00:15:25.525 "bdev_auto_examine": true, 00:15:25.525 "bdev_io_cache_size": 256, 00:15:25.525 "bdev_io_pool_size": 65535, 00:15:25.525 "iobuf_large_cache_size": 16, 00:15:25.525 "iobuf_small_cache_size": 128 00:15:25.525 } 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "method": "bdev_raid_set_options", 00:15:25.525 "params": { 00:15:25.525 "process_max_bandwidth_mb_sec": 0, 00:15:25.525 "process_window_size_kb": 1024 00:15:25.525 } 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "method": "bdev_iscsi_set_options", 
00:15:25.525 "params": { 00:15:25.525 "timeout_sec": 30 00:15:25.525 } 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "method": "bdev_nvme_set_options", 00:15:25.525 "params": { 00:15:25.525 "action_on_timeout": "none", 00:15:25.525 "allow_accel_sequence": false, 00:15:25.525 "arbitration_burst": 0, 00:15:25.525 "bdev_retry_count": 3, 00:15:25.525 "ctrlr_loss_timeout_sec": 0, 00:15:25.525 "delay_cmd_submit": true, 00:15:25.525 "dhchap_dhgroups": [ 00:15:25.525 "null", 00:15:25.525 "ffdhe2048", 00:15:25.525 "ffdhe3072", 00:15:25.525 "ffdhe4096", 00:15:25.525 "ffdhe6144", 00:15:25.525 "ffdhe8192" 00:15:25.525 ], 00:15:25.525 "dhchap_digests": [ 00:15:25.525 "sha256", 00:15:25.525 "sha384", 00:15:25.525 "sha512" 00:15:25.525 ], 00:15:25.525 "disable_auto_failback": false, 00:15:25.525 "fast_io_fail_timeout_sec": 0, 00:15:25.525 "generate_uuids": false, 00:15:25.525 "high_priority_weight": 0, 00:15:25.525 "io_path_stat": false, 00:15:25.525 "io_queue_requests": 512, 00:15:25.525 "keep_alive_timeout_ms": 10000, 00:15:25.525 "low_priority_weight": 0, 00:15:25.525 "medium_priority_weight": 0, 00:15:25.525 "nvme_adminq_poll_period_us": 10000, 00:15:25.525 "nvme_error_stat": false, 00:15:25.525 "nvme_ioq_poll_period_us": 0, 00:15:25.525 "rdma_cm_event_timeout_ms": 0, 00:15:25.525 "rdma_max_cq_size": 0, 00:15:25.525 "rdma_srq_size": 0, 00:15:25.525 "reconnect_delay_sec": 0, 00:15:25.525 "timeout_admin_us": 0, 00:15:25.525 "timeout_us": 0, 00:15:25.525 "transport_ack_timeout": 0, 00:15:25.525 "transport_retry_count": 4, 00:15:25.525 "transport_tos": 0 00:15:25.525 } 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "method": "bdev_nvme_attach_controller", 00:15:25.525 "params": { 00:15:25.525 "adrfam": "IPv4", 00:15:25.525 "ctrlr_loss_timeout_sec": 0, 00:15:25.525 "ddgst": false, 00:15:25.525 "fast_io_fail_timeout_sec": 0, 00:15:25.525 "hdgst": false, 00:15:25.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.525 "multipath": "multipath", 00:15:25.525 "name": "nvme0", 00:15:25.525 "prchk_guard": false, 00:15:25.525 "prchk_reftag": false, 00:15:25.525 "psk": "key0", 00:15:25.525 "reconnect_delay_sec": 0, 00:15:25.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.525 "traddr": "10.0.0.3", 00:15:25.525 "trsvcid": "4420", 00:15:25.525 "trtype": "TCP" 00:15:25.525 } 00:15:25.525 }, 00:15:25.525 { 00:15:25.525 "method": "bdev_nvme_set_hotplug", 00:15:25.525 "params": { 00:15:25.525 "enable": false, 00:15:25.526 "period_us": 100000 00:15:25.526 } 00:15:25.526 }, 00:15:25.526 { 00:15:25.526 "method": "bdev_enable_histogram", 00:15:25.526 "params": { 00:15:25.526 "enable": true, 00:15:25.526 "name": "nvme0n1" 00:15:25.526 } 00:15:25.526 }, 00:15:25.526 { 00:15:25.526 "method": "bdev_wait_for_examine" 00:15:25.526 } 00:15:25.526 ] 00:15:25.526 }, 00:15:25.526 { 00:15:25.526 "subsystem": "nbd", 00:15:25.526 "config": [] 00:15:25.526 } 00:15:25.526 ] 00:15:25.526 }' 00:15:25.526 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:25.526 [2024-12-05 12:18:56.220385] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:15:25.526 [2024-12-05 12:18:56.220467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83985 ] 00:15:25.526 [2024-12-05 12:18:56.365852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.783 [2024-12-05 12:18:56.419486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.783 [2024-12-05 12:18:56.600545] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:26.347 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.347 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:26.347 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:26.347 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:26.604 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.604 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:26.861 Running I/O for 1 seconds... 00:15:27.796 4803.00 IOPS, 18.76 MiB/s 00:15:27.796 Latency(us) 00:15:27.796 [2024-12-05T12:18:58.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.796 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.796 Verification LBA range: start 0x0 length 0x2000 00:15:27.796 nvme0n1 : 1.01 4858.63 18.98 0.00 0.00 26119.41 5600.35 19899.11 00:15:27.796 [2024-12-05T12:18:58.665Z] =================================================================================================================== 00:15:27.796 [2024-12-05T12:18:58.665Z] Total : 4858.63 18.98 0.00 0.00 26119.41 5600.35 19899.11 00:15:27.796 { 00:15:27.796 "results": [ 00:15:27.796 { 00:15:27.796 "job": "nvme0n1", 00:15:27.796 "core_mask": "0x2", 00:15:27.796 "workload": "verify", 00:15:27.796 "status": "finished", 00:15:27.796 "verify_range": { 00:15:27.796 "start": 0, 00:15:27.796 "length": 8192 00:15:27.796 }, 00:15:27.796 "queue_depth": 128, 00:15:27.796 "io_size": 4096, 00:15:27.796 "runtime": 1.014895, 00:15:27.796 "iops": 4858.6306957862635, 00:15:27.796 "mibps": 18.97902615541509, 00:15:27.796 "io_failed": 0, 00:15:27.796 "io_timeout": 0, 00:15:27.796 "avg_latency_us": 26119.41442377537, 00:15:27.796 "min_latency_us": 5600.349090909091, 00:15:27.796 "max_latency_us": 19899.112727272728 00:15:27.796 } 00:15:27.796 ], 00:15:27.796 "core_count": 1 00:15:27.796 } 00:15:27.796 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:27.796 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:27.796 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:27.796 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:27.796 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:15:27.797 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:27.797 
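Before the timed run, target/tls.sh@279 sanity-checks that the controller really came up from the replayed config by listing controllers over the bdevperf RPC socket. The same check as a standalone sketch:

    # expect exactly the controller named in the saved config
    name=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]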
12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:27.797 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:27.797 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:27.797 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:27.797 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:27.797 nvmf_trace.0 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 83985 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83985 ']' 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83985 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83985 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83985' 00:15:28.056 killing process with pid 83985 00:15:28.056 Received shutdown signal, test time was about 1.000000 seconds 00:15:28.056 00:15:28.056 Latency(us) 00:15:28.056 [2024-12-05T12:18:58.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.056 [2024-12-05T12:18:58.925Z] =================================================================================================================== 00:15:28.056 [2024-12-05T12:18:58.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83985 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83985 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:28.056 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:28.315 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:28.315 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:28.315 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.315 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:28.315 rmmod nvme_tcp 00:15:28.315 rmmod nvme_fabrics 00:15:28.315 rmmod nvme_keyring 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 83940 ']' 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 83940 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83940 ']' 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83940 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83940 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.315 killing process with pid 83940 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83940' 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83940 00:15:28.315 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83940 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:28.573 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:28.574 12:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:28.574 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:28.574 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:28.574 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Uqy8kpGKhG /tmp/tmp.RXQYYKUNom /tmp/tmp.OBQhAmcRBa 00:15:28.893 00:15:28.893 real 1m20.879s 00:15:28.893 user 2m9.516s 00:15:28.893 sys 0m27.681s 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.893 ************************************ 00:15:28.893 END TEST nvmf_tls 00:15:28.893 ************************************ 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.893 ************************************ 00:15:28.893 START TEST nvmf_fips 00:15:28.893 ************************************ 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:28.893 * Looking for test storage... 
00:15:28.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.893 --rc genhtml_branch_coverage=1 00:15:28.893 --rc genhtml_function_coverage=1 00:15:28.893 --rc genhtml_legend=1 00:15:28.893 --rc geninfo_all_blocks=1 00:15:28.893 --rc geninfo_unexecuted_blocks=1 00:15:28.893 00:15:28.893 ' 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.893 --rc genhtml_branch_coverage=1 00:15:28.893 --rc genhtml_function_coverage=1 00:15:28.893 --rc genhtml_legend=1 00:15:28.893 --rc geninfo_all_blocks=1 00:15:28.893 --rc geninfo_unexecuted_blocks=1 00:15:28.893 00:15:28.893 ' 00:15:28.893 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.893 --rc genhtml_branch_coverage=1 00:15:28.893 --rc genhtml_function_coverage=1 00:15:28.893 --rc genhtml_legend=1 00:15:28.894 --rc geninfo_all_blocks=1 00:15:28.894 --rc geninfo_unexecuted_blocks=1 00:15:28.894 00:15:28.894 ' 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.894 --rc genhtml_branch_coverage=1 00:15:28.894 --rc genhtml_function_coverage=1 00:15:28.894 --rc genhtml_legend=1 00:15:28.894 --rc geninfo_all_blocks=1 00:15:28.894 --rc geninfo_unexecuted_blocks=1 00:15:28.894 00:15:28.894 ' 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.894 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:29.164 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.164 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:29.164 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.164 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.164 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.164 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.165 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:29.165 Error setting digest 00:15:29.165 40327C83BB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:29.165 40327C83BB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:29.165 
12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.165 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:29.166 Cannot find device "nvmf_init_br" 00:15:29.166 12:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:29.166 Cannot find device "nvmf_init_br2" 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:29.166 Cannot find device "nvmf_tgt_br" 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.166 Cannot find device "nvmf_tgt_br2" 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:29.166 Cannot find device "nvmf_init_br" 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:29.166 Cannot find device "nvmf_init_br2" 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:29.166 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:29.166 Cannot find device "nvmf_tgt_br" 00:15:29.166 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:29.166 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:29.166 Cannot find device "nvmf_tgt_br2" 00:15:29.166 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:29.166 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:29.166 Cannot find device "nvmf_br" 00:15:29.166 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:29.166 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:29.425 Cannot find device "nvmf_init_if" 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:29.425 Cannot find device "nvmf_init_if2" 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.425 12:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:29.425 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:29.425 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:29.425 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:29.425 00:15:29.425 --- 10.0.0.3 ping statistics --- 00:15:29.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.425 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:29.426 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:29.426 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:29.426 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:15:29.426 00:15:29.426 --- 10.0.0.4 ping statistics --- 00:15:29.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.426 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:29.426 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:29.426 00:15:29.426 --- 10.0.0.1 ping statistics --- 00:15:29.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.426 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:29.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:29.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:29.684 00:15:29.684 --- 10.0.0.2 ping statistics --- 00:15:29.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.684 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=84327 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 84327 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84327 ']' 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.684 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:29.684 [2024-12-05 12:19:00.419222] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:15:29.684 [2024-12-05 12:19:00.419325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.941 [2024-12-05 12:19:00.573446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.941 [2024-12-05 12:19:00.628187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.941 [2024-12-05 12:19:00.628245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.941 [2024-12-05 12:19:00.628259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.941 [2024-12-05 12:19:00.628270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.941 [2024-12-05 12:19:00.628295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:29.941 [2024-12-05 12:19:00.628733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.wXA 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.wXA 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.wXA 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.wXA 00:15:30.874 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:30.874 [2024-12-05 12:19:01.702854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.874 [2024-12-05 12:19:01.718807] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:30.874 [2024-12-05 12:19:01.719013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:31.132 malloc0 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:31.132 12:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84386 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84386 /var/tmp/bdevperf.sock 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84386 ']' 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.132 12:19:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:31.132 [2024-12-05 12:19:01.869071] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:31.133 [2024-12-05 12:19:01.869170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84386 ] 00:15:31.392 [2024-12-05 12:19:02.013348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.392 [2024-12-05 12:19:02.062536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.392 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.392 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:31.392 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.wXA 00:15:31.650 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:31.909 [2024-12-05 12:19:02.664925] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.909 TLSTESTn1 00:15:31.909 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.168 Running I/O for 10 seconds... 
00:15:34.042 4600.00 IOPS, 17.97 MiB/s [2024-12-05T12:19:06.284Z] 4548.00 IOPS, 17.77 MiB/s [2024-12-05T12:19:07.216Z] 4576.33 IOPS, 17.88 MiB/s [2024-12-05T12:19:08.151Z] 4615.50 IOPS, 18.03 MiB/s [2024-12-05T12:19:09.086Z] 4655.00 IOPS, 18.18 MiB/s [2024-12-05T12:19:10.022Z] 4679.33 IOPS, 18.28 MiB/s [2024-12-05T12:19:10.959Z] 4696.00 IOPS, 18.34 MiB/s [2024-12-05T12:19:11.893Z] 4705.25 IOPS, 18.38 MiB/s [2024-12-05T12:19:13.272Z] 4715.56 IOPS, 18.42 MiB/s [2024-12-05T12:19:13.272Z] 4719.50 IOPS, 18.44 MiB/s 00:15:42.403 Latency(us) 00:15:42.403 [2024-12-05T12:19:13.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.403 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:42.403 Verification LBA range: start 0x0 length 0x2000 00:15:42.403 TLSTESTn1 : 10.01 4725.65 18.46 0.00 0.00 27042.01 5272.67 19779.96 00:15:42.403 [2024-12-05T12:19:13.272Z] =================================================================================================================== 00:15:42.403 [2024-12-05T12:19:13.272Z] Total : 4725.65 18.46 0.00 0.00 27042.01 5272.67 19779.96 00:15:42.403 { 00:15:42.403 "results": [ 00:15:42.403 { 00:15:42.403 "job": "TLSTESTn1", 00:15:42.403 "core_mask": "0x4", 00:15:42.403 "workload": "verify", 00:15:42.403 "status": "finished", 00:15:42.403 "verify_range": { 00:15:42.403 "start": 0, 00:15:42.403 "length": 8192 00:15:42.403 }, 00:15:42.403 "queue_depth": 128, 00:15:42.403 "io_size": 4096, 00:15:42.403 "runtime": 10.013444, 00:15:42.403 "iops": 4725.6468403877825, 00:15:42.403 "mibps": 18.459557970264775, 00:15:42.403 "io_failed": 0, 00:15:42.403 "io_timeout": 0, 00:15:42.403 "avg_latency_us": 27042.01359256128, 00:15:42.403 "min_latency_us": 5272.669090909091, 00:15:42.403 "max_latency_us": 19779.956363636364 00:15:42.403 } 00:15:42.403 ], 00:15:42.403 "core_count": 1 00:15:42.403 } 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:42.403 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:42.403 nvmf_trace.0 00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84386 00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84386 ']' 00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
84386
00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84386
00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
killing process with pid 84386
Received shutdown signal, test time was about 10.000000 seconds
00:15:42.403
00:15:42.403 Latency(us)
00:15:42.403 [2024-12-05T12:19:13.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:42.403 [2024-12-05T12:19:13.272Z] ===================================================================================================================
00:15:42.403 [2024-12-05T12:19:13.272Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84386'
00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84386
00:15:42.403 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84386
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 84327 ']'
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 84327
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84327 ']'
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 84327
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84327
00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:42.662 killing process with pid 84327
12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84327' 00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84327 00:15:42.662 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84327 00:15:42.920 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:42.921 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:43.180 12:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.wXA
00:15:43.180
00:15:43.180 real 0m14.340s
00:15:43.180 user 0m18.470s
00:15:43.180 sys 0m6.399s
00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:15:43.180 ************************************
00:15:43.180 END TEST nvmf_fips
00:15:43.180 ************************************
00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:43.180 ************************************
00:15:43.180 START TEST nvmf_control_msg_list
00:15:43.180 ************************************
00:15:43.180 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:15:43.180 * Looking for test storage...
00:15:43.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:15:43.180 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:43.180 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version
00:15:43.180 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-:
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-:
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<'
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1
00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:43.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.468 --rc genhtml_branch_coverage=1 00:15:43.468 --rc genhtml_function_coverage=1 00:15:43.468 --rc genhtml_legend=1 00:15:43.468 --rc geninfo_all_blocks=1 00:15:43.468 --rc geninfo_unexecuted_blocks=1 00:15:43.468 00:15:43.468 ' 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:43.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.468 --rc genhtml_branch_coverage=1 00:15:43.468 --rc genhtml_function_coverage=1 00:15:43.468 --rc genhtml_legend=1 00:15:43.468 --rc geninfo_all_blocks=1 00:15:43.468 --rc geninfo_unexecuted_blocks=1 00:15:43.468 00:15:43.468 ' 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:43.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.468 --rc genhtml_branch_coverage=1 00:15:43.468 --rc genhtml_function_coverage=1 00:15:43.468 --rc genhtml_legend=1 00:15:43.468 --rc geninfo_all_blocks=1 00:15:43.468 --rc geninfo_unexecuted_blocks=1 00:15:43.468 00:15:43.468 ' 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:43.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.468 --rc genhtml_branch_coverage=1 00:15:43.468 --rc genhtml_function_coverage=1 00:15:43.468 --rc genhtml_legend=1 00:15:43.468 --rc geninfo_all_blocks=1 00:15:43.468 --rc 
geninfo_unexecuted_blocks=1 00:15:43.468 00:15:43.468 ' 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:43.468 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:43.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:43.469 Cannot find device "nvmf_init_br" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:43.469 Cannot find device "nvmf_init_br2" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:43.469 Cannot find device "nvmf_tgt_br" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:43.469 Cannot find device "nvmf_tgt_br2" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:43.469 Cannot find device "nvmf_init_br" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:43.469 Cannot find device "nvmf_init_br2" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:43.469 Cannot find device "nvmf_tgt_br" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:43.469 Cannot find device "nvmf_tgt_br2" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:43.469 Cannot find device "nvmf_br" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:43.469 Cannot find 
device "nvmf_init_if" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:43.469 Cannot find device "nvmf_init_if2" 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:43.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:43.469 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:43.469 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:43.470 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:43.470 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:43.470 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:43.729 12:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:15:43.729 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:15:43.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:43.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms
00:15:43.730
00:15:43.730 --- 10.0.0.3 ping statistics ---
00:15:43.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:43.730 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:15:43.730 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:15:43.730 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms
00:15:43.730
00:15:43.730 --- 10.0.0.4 ping statistics ---
00:15:43.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:43.730 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:43.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:43.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:15:43.730
00:15:43.730 --- 10.0.0.1 ping statistics ---
00:15:43.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:43.730 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:15:43.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:43.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms
00:15:43.730
00:15:43.730 --- 10.0.0.2 ping statistics ---
00:15:43.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:43.730 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=84786
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 84786
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 84786 ']'
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:43.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.730 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:43.989 [2024-12-05 12:19:14.612009] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:43.989 [2024-12-05 12:19:14.612107] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.989 [2024-12-05 12:19:14.766375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.989 [2024-12-05 12:19:14.830472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.989 [2024-12-05 12:19:14.830545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.989 [2024-12-05 12:19:14.830560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.989 [2024-12-05 12:19:14.830572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.989 [2024-12-05 12:19:14.830582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.989 [2024-12-05 12:19:14.831106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:44.249 [2024-12-05 12:19:15.052656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:44.249 Malloc0 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:44.249 [2024-12-05 12:19:15.094682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=84817 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=84818 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=84819 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:44.249 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 84817 00:15:44.506 [2024-12-05 12:19:15.283023] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release.
00:15:44.507 [2024-12-05 12:19:15.293470] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:44.507 [2024-12-05 12:19:15.293682] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:45.440 Initializing NVMe Controllers
00:15:45.440 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:15:45.440 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:15:45.440 Initialization complete. Launching workers.
00:15:45.440 ========================================================
00:15:45.440 Latency(us)
00:15:45.440 Device Information : IOPS MiB/s Average min max
00:15:45.440 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3950.00 15.43 252.86 123.87 747.48
00:15:45.440 ========================================================
00:15:45.440 Total : 3950.00 15.43 252.86 123.87 747.48
00:15:45.440
00:15:45.699 Initializing NVMe Controllers
00:15:45.699 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:15:45.699 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:15:45.699 Initialization complete. Launching workers.
00:15:45.699 ========================================================
00:15:45.699 Latency(us)
00:15:45.699 Device Information : IOPS MiB/s Average min max
00:15:45.699 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3903.00 15.25 255.85 138.42 1651.79
00:15:45.699 ========================================================
00:15:45.699 Total : 3903.00 15.25 255.85 138.42 1651.79
00:15:45.699
00:15:45.699 Initializing NVMe Controllers
00:15:45.699 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:15:45.699 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:15:45.699 Initialization complete. Launching workers.
00:15:45.699 ========================================================
00:15:45.699 Latency(us)
00:15:45.699 Device Information : IOPS MiB/s Average min max
00:15:45.699 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3902.18 15.24 255.81 143.71 762.81
00:15:45.699 ========================================================
00:15:45.699 Total : 3902.18 15.24 255.81 143.71 762.81
00:15:45.699
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 84818
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 84819
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 84786 ']'
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 84786
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 84786 ']'
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 84786
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84786
00:15:45.699 killing process with pid 84786
12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84786'
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 84786
00:15:45.699 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@978 -- # wait 84786
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:15:45.958 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0
00:15:46.217
00:15:46.217 real 0m3.034s
00:15:46.217 user 0m4.710s
00:15:46.217 sys 0m1.485s
00:15:46.217 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:46.217 ************************************
00:15:46.217 END TEST nvmf_control_msg_list
12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:15:46.217 ************************************
00:15:46.217 12:19:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:15:46.217 12:19:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:46.217 12:19:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:46.217 12:19:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:46.217 ************************************
00:15:46.217 START TEST nvmf_wait_for_buf
00:15:46.217 ************************************
00:15:46.217 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:15:46.477 * Looking for test storage...
00:15:46.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-:
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-:
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<'
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:46.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.477 --rc genhtml_branch_coverage=1 00:15:46.477 --rc genhtml_function_coverage=1 00:15:46.477 --rc genhtml_legend=1 00:15:46.477 --rc geninfo_all_blocks=1 00:15:46.477 --rc geninfo_unexecuted_blocks=1 00:15:46.477 00:15:46.477 ' 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:46.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.477 --rc genhtml_branch_coverage=1 00:15:46.477 --rc genhtml_function_coverage=1 00:15:46.477 --rc genhtml_legend=1 00:15:46.477 --rc geninfo_all_blocks=1 00:15:46.477 --rc geninfo_unexecuted_blocks=1 00:15:46.477 00:15:46.477 ' 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:46.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.477 --rc genhtml_branch_coverage=1 00:15:46.477 --rc genhtml_function_coverage=1 00:15:46.477 --rc genhtml_legend=1 00:15:46.477 --rc geninfo_all_blocks=1 00:15:46.477 --rc geninfo_unexecuted_blocks=1 00:15:46.477 00:15:46.477 ' 00:15:46.477 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:46.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.477 --rc genhtml_branch_coverage=1 00:15:46.478 --rc genhtml_function_coverage=1 00:15:46.478 --rc genhtml_legend=1 00:15:46.478 --rc geninfo_all_blocks=1 00:15:46.478 --rc geninfo_unexecuted_blocks=1 00:15:46.478 00:15:46.478 ' 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.478 12:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.478 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
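
A note on the "[: : integer expression expected" message in the trace above: nvmf/common.sh line 33 evaluates a numeric test of the form '[' '' -eq 1 ']' while the variable behind it is empty, and test(1) rejects an empty string as an integer operand. It is noise rather than a failure; the script keeps going. A minimal reproduction, with an illustrative variable name (the actual variable at common.sh line 33 is not visible in this trace):

    # Reproduces the error: an empty string is not a valid integer operand.
    FLAG=""
    [ "$FLAG" -eq 1 ] && echo enabled      # -> [: : integer expression expected

    # Guarded form that stays quiet when the flag is unset or empty:
    [ "${FLAG:-0}" -eq 1 ] && echo enabled
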
00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.478 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:46.479 Cannot find device "nvmf_init_br" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:46.479 Cannot find device "nvmf_init_br2" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:46.479 Cannot find device "nvmf_tgt_br" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.479 Cannot find device "nvmf_tgt_br2" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:46.479 Cannot find device "nvmf_init_br" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:46.479 Cannot find device "nvmf_init_br2" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:46.479 Cannot find device "nvmf_tgt_br" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:46.479 Cannot find device "nvmf_tgt_br2" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:46.479 Cannot find device "nvmf_br" 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:46.479 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:46.738 Cannot find device "nvmf_init_if" 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:46.738 Cannot find device "nvmf_init_if2" 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.738 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:46.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:46.738 00:15:46.738 --- 10.0.0.3 ping statistics --- 00:15:46.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.738 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:46.738 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:46.738 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:15:46.738 00:15:46.738 --- 10.0.0.4 ping statistics --- 00:15:46.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.738 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:46.738 00:15:46.738 --- 10.0.0.1 ping statistics --- 00:15:46.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.738 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:46.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:46.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:46.738 00:15:46.738 --- 10.0.0.2 ping statistics --- 00:15:46.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.738 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:46.738 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.739 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:46.739 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:46.739 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.739 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:46.739 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:46.739 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.739 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:46.739 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:46.997 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:46.997 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85061 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85061 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85061 ']' 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.998 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:46.998 [2024-12-05 12:19:17.669465] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
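
The setup traced above builds a self-contained test topology before any NVMe-oF traffic flows: a network namespace holding the target-side veth ends, a bridge tying the links together, iptables ACCEPT rules tagged with an SPDK_NVMF comment, and ping checks in both directions. A condensed, runnable sketch of the same steps (one veth pair per side shown for brevity, where the script creates two of each; the script's own teardown pairs every command with || true, which is why the "Cannot find device" messages earlier are harmless):

    set -e
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry the addresses, the *_br ends join the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Tag the firewall rules by comment so teardown can strip exactly these
    # (iptables-save | grep -v SPDK_NVMF | iptables-restore, as in the trace):
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    ping -c 1 10.0.0.3                                   # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host
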
00:15:46.998 [2024-12-05 12:19:17.669525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.998 [2024-12-05 12:19:17.810386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.998 [2024-12-05 12:19:17.858401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.998 [2024-12-05 12:19:17.858483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.998 [2024-12-05 12:19:17.858512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.998 [2024-12-05 12:19:17.858534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.998 [2024-12-05 12:19:17.858541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.998 [2024-12-05 12:19:17.858931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.259 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.260 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.260 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:47.260 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.260 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.260 12:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.260 Malloc0 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.260 [2024-12-05 12:19:18.101712] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.260 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.518 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.518 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:47.518 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.518 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:47.518 [2024-12-05 12:19:18.133793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:47.518 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.518 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:47.518 [2024-12-05 12:19:18.334382] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:15:48.893 Initializing NVMe Controllers 00:15:48.893 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:48.893 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:48.893 Initialization complete. Launching workers. 00:15:48.893 ======================================================== 00:15:48.893 Latency(us) 00:15:48.893 Device Information : IOPS MiB/s Average min max 00:15:48.893 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 135.46 16.93 30588.01 8020.73 64007.78 00:15:48.893 ======================================================== 00:15:48.893 Total : 135.46 16.93 30588.01 8020.73 64007.78 00:15:48.893 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2150 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2150 -eq 0 ]] 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:48.893 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:48.894 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:48.894 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:48.894 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:48.894 rmmod nvme_tcp 00:15:49.152 rmmod nvme_fabrics 00:15:49.152 rmmod nvme_keyring 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85061 ']' 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85061 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85061 ']' 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 85061 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
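
End to end, the wait_for_buf flow traced above condenses to a short sequence: start nvmf_tgt inside the namespace with --wait-for-rpc, shrink the small iobuf pool to 154 buffers before initialization finishes, stand up a malloc-backed TCP subsystem, drive 128 KiB random reads at it, and assert that the buffer-exhaustion retry counter moved (2150 retries in this run). A sketch using scripts/rpc.py directly; rpc_cmd in the trace is a thin wrapper around it, and the polling loop is a simplified stand-in for waitforlisten:

    set -e
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

    # Start the target in the test namespace, paused until framework_start_init.
    ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until $RPC -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid"    # abort (via set -e) if the target died
        sleep 0.1
    done

    # Starve the small buffer pool, then finish initialization (values from the trace).
    $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
    $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $RPC framework_start_init

    # One malloc-backed TCP subsystem listening inside the namespace.
    $RPC bdev_malloc_create -b Malloc0 32 512         # 32 MiB bdev, 512 B blocks
    $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420

    # 4-deep 128 KiB random reads for 1 s easily outrun 154 x 8 KiB small buffers.
    ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

    # Pass/fail: the nvmf_TCP small-pool retry counter must be non-zero.
    retry_count=$($RPC iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1
    echo "small-pool retries: $retry_count"           # 2150 in the run above
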
00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85061 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.152 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.152 killing process with pid 85061 00:15:49.153 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85061' 00:15:49.153 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85061 00:15:49.153 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85061 00:15:49.153 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:49.153 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:49.153 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:49.411 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.412 12:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.412 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:49.671 ************************************ 00:15:49.671 END TEST nvmf_wait_for_buf 00:15:49.671 ************************************ 00:15:49.671 00:15:49.671 real 0m3.273s 00:15:49.671 user 0m2.616s 00:15:49.671 sys 0m0.790s 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.671 ************************************ 00:15:49.671 START TEST nvmf_nsid 00:15:49.671 ************************************ 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:49.671 * Looking for test storage... 
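
The START TEST / END TEST banners and the real/user/sys summaries that bracket each test above come from the run_test wrapper in autotest_common.sh. Its shape, simplified (the real function also validates arguments and manages xtrace state):

    # Simplified sketch of run_test; the banners and timing match the log above.
    run_test() {
        local test_name=$1; shift
        echo "************ START TEST $test_name ************"
        time "$@"                 # produces the real/user/sys lines
        echo "************ END TEST $test_name ************"
    }

    run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp
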
00:15:49.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.671 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:49.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.672 --rc genhtml_branch_coverage=1 00:15:49.672 --rc genhtml_function_coverage=1 00:15:49.672 --rc genhtml_legend=1 00:15:49.672 --rc geninfo_all_blocks=1 00:15:49.672 --rc geninfo_unexecuted_blocks=1 00:15:49.672 00:15:49.672 ' 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:49.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.672 --rc genhtml_branch_coverage=1 00:15:49.672 --rc genhtml_function_coverage=1 00:15:49.672 --rc genhtml_legend=1 00:15:49.672 --rc geninfo_all_blocks=1 00:15:49.672 --rc geninfo_unexecuted_blocks=1 00:15:49.672 00:15:49.672 ' 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:49.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.672 --rc genhtml_branch_coverage=1 00:15:49.672 --rc genhtml_function_coverage=1 00:15:49.672 --rc genhtml_legend=1 00:15:49.672 --rc geninfo_all_blocks=1 00:15:49.672 --rc geninfo_unexecuted_blocks=1 00:15:49.672 00:15:49.672 ' 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:49.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.672 --rc genhtml_branch_coverage=1 00:15:49.672 --rc genhtml_function_coverage=1 00:15:49.672 --rc genhtml_legend=1 00:15:49.672 --rc geninfo_all_blocks=1 00:15:49.672 --rc geninfo_unexecuted_blocks=1 00:15:49.672 00:15:49.672 ' 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
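
Each time nvmf/common.sh is sourced (as in the trace resuming below, and earlier for the wait_for_buf run) it regenerates the initiator identity with nvme-cli and keeps both forms in the NVME_HOST array. A sketch; the parameter expansion used here to split off the bare UUID is illustrative, not necessarily the script's exact method:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}  # the bare UUID, passed as --hostid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
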
00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.672 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:49.931 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.931 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:49.932 Cannot find device "nvmf_init_br" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:49.932 Cannot find device "nvmf_init_br2" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:49.932 Cannot find device "nvmf_tgt_br" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.932 Cannot find device "nvmf_tgt_br2" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:49.932 Cannot find device "nvmf_init_br" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:49.932 Cannot find device "nvmf_init_br2" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:49.932 Cannot find device "nvmf_tgt_br" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:49.932 Cannot find device "nvmf_tgt_br2" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:49.932 Cannot find device "nvmf_br" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:49.932 Cannot find device "nvmf_init_if" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:49.932 Cannot find device "nvmf_init_if2" 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:15:49.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.932 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
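The "Cannot find device" and "Cannot open network namespace" messages above are expected on a fresh run: nvmf_veth_init first tears down any leftover interfaces (each cleanup command is forced to succeed, hence the "true" after every failure) before building the test bed. What it builds is two initiator veth pairs on the host (10.0.0.1 and 10.0.0.2) and two target pairs whose far ends live inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with all host-side peers enslaved to the nvmf_br bridge. Condensed to a single initiator/target pair, the commands traced above amount to this sketch:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move far end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge                             # bridge joins the host-side peers
    ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br

The second set of pairs (nvmf_init_if2 / nvmf_tgt_if2) is wired identically, which is what lets the host reach the namespaced target at 10.0.0.3 and 10.0.0.4 from 10.0.0.1 and 10.0.0.2.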
00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:50.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:50.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:15:50.191 00:15:50.191 --- 10.0.0.3 ping statistics --- 00:15:50.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.191 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:50.191 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:50.191 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:50.191 00:15:50.191 --- 10.0.0.4 ping statistics --- 00:15:50.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.191 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:50.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:50.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:50.191 00:15:50.191 --- 10.0.0.1 ping statistics --- 00:15:50.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.191 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:50.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:50.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:15:50.191 00:15:50.191 --- 10.0.0.2 ping statistics --- 00:15:50.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.191 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=85335 00:15:50.191 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:15:50.192 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 85335 00:15:50.192 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85335 ']' 00:15:50.192 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.192 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.192 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.192 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.192 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:50.192 [2024-12-05 12:19:21.054821] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:15:50.192 [2024-12-05 12:19:21.054906] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.449 [2024-12-05 12:19:21.201816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.450 [2024-12-05 12:19:21.244279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.450 [2024-12-05 12:19:21.244496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.450 [2024-12-05 12:19:21.244582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.450 [2024-12-05 12:19:21.244674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.450 [2024-12-05 12:19:21.244738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.450 [2024-12-05 12:19:21.245167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=85364 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:50.708 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
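Two things happened in the span above. First, the ipts wrapper inserted the firewall rules that admit NVMe/TCP traffic on port 4420, tagging each rule with an SPDK_NVMF comment, and the four pings confirmed bridge connectivity in both directions before any NVMe traffic flows. Second, nvmfappstart launched the first target, nvmf_tgt (pid 85335), inside the namespace and pinned to core 0 by the -m 1 mask; the DPDK/EAL banner is that process initializing. The tagging convention matters for teardown later, and can be reconstructed from the expansions visible in the trace (a sketch; the real helper bodies live in test/nvmf/common.sh):

    # ipts: run iptables, recording the full rule spec in a comment tag
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    # iptr: at teardown, delete exactly the tagged rules and nothing else
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

nsid.sh has also started a second, independent target on the host: spdk_tgt (pid 85364), given -m 2 so it takes core 1 and its own RPC socket at /var/tmp/tgt2.sock, and get_main_ns_ip resolved its address to 10.0.0.1, the host-side initiator IP.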
00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d6daa931-c8e0-4bec-b4e3-e4fb35714a56 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=bb28e254-fa77-42eb-a916-dd0398af3a6a 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e14df97c-7df4-4869-a485-41878211a626 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:50.709 null0 00:15:50.709 null1 00:15:50.709 null2 00:15:50.709 [2024-12-05 12:19:21.473778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.709 [2024-12-05 12:19:21.493934] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:15:50.709 [2024-12-05 12:19:21.494014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85364 ] 00:15:50.709 [2024-12-05 12:19:21.497907] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 85364 /var/tmp/tgt2.sock 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85364 ']' 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.709 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:50.967 [2024-12-05 12:19:21.647223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.967 [2024-12-05 12:19:21.703085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.224 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.224 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:51.224 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:15:51.789 [2024-12-05 12:19:22.360678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.789 [2024-12-05 12:19:22.376746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:15:51.789 nvme0n1 nvme0n2 00:15:51.789 nvme1n1 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:15:51.789 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:15:52.722 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:52.722 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:52.722 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:52.722 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:52.980 12:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d6daa931-c8e0-4bec-b4e3-e4fb35714a56 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d6daa931c8e04becb4e3e4fb35714a56 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D6DAA931C8E04BECB4E3E4FB35714A56 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D6DAA931C8E04BECB4E3E4FB35714A56 == \D\6\D\A\A\9\3\1\C\8\E\0\4\B\E\C\B\4\E\3\E\4\F\B\3\5\7\1\4\A\5\6 ]] 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:52.980 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid bb28e254-fa77-42eb-a916-dd0398af3a6a 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bb28e254fa7742eba916dd0398af3a6a 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BB28E254FA7742EBA916DD0398AF3A6A 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ BB28E254FA7742EBA916DD0398AF3A6A == \B\B\2\8\E\2\5\4\F\A\7\7\4\2\E\B\A\9\1\6\D\D\0\3\9\8\A\F\3\A\6\A ]] 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:15:52.981 12:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e14df97c-7df4-4869-a485-41878211a626 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e14df97c7df44869a48541878211a626 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E14DF97C7DF44869A48541878211A626 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E14DF97C7DF44869A48541878211A626 == \E\1\4\D\F\9\7\C\7\D\F\4\4\8\6\9\A\4\8\5\4\1\8\7\8\2\1\1\A\6\2\6 ]] 00:15:52.981 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:15:53.243 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:53.243 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:15:53.243 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 85364 00:15:53.243 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85364 ']' 00:15:53.243 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85364 00:15:53.243 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:53.243 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.243 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85364 00:15:53.243 killing process with pid 85364 00:15:53.243 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:53.243 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:53.243 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85364' 00:15:53.243 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85364 00:15:53.243 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85364 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
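That was the heart of the test: for each of the three namespaces, the NGUID reported by the controller must equal the UUID the test generated with uuidgen, minus its dashes, and all three comparisons (D6DA..., BB28..., E14D...) matched. Reduced to one namespace, the check traced above is roughly the following (a sketch built from the uuid2nguid and nvme_get_nguid expansions; the exact helper definitions are in nsid.sh and nvmf/common.sh):

    uuid=d6daa931-c8e0-4bec-b4e3-e4fb35714a56             # from uuidgen earlier
    want=$(tr -d - <<< "$uuid")                           # uuid2nguid: drop the dashes
    got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid) # NGUID as the controller reports it
    [[ ${got^^} == "${want^^}" ]]                         # compared upper-cased, as in the trace

With the assertions done, the initiator disconnected (nvme disconnect -d /dev/nvme0), the second target (pid 85364, reactor_1) was killed, and nvmftestfini is now unwinding the first target and the virtual network.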
00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:53.853 rmmod nvme_tcp 00:15:53.853 rmmod nvme_fabrics 00:15:53.853 rmmod nvme_keyring 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 85335 ']' 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 85335 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85335 ']' 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85335 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85335 00:15:53.853 killing process with pid 85335 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85335' 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85335 00:15:53.853 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85335 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:54.111 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:54.369 12:19:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:15:54.369 ************************************ 00:15:54.369 END TEST nvmf_nsid 00:15:54.369 ************************************ 00:15:54.369 00:15:54.369 real 0m4.754s 00:15:54.369 user 0m7.355s 00:15:54.369 sys 0m1.328s 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:54.369 00:15:54.369 real 6m58.891s 00:15:54.369 user 16m54.213s 00:15:54.369 sys 1m25.459s 00:15:54.369 ************************************ 00:15:54.369 END TEST nvmf_target_extra 00:15:54.369 ************************************ 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.369 12:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.369 12:19:25 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:54.369 12:19:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.369 12:19:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.369 12:19:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:54.369 ************************************ 00:15:54.369 START TEST nvmf_host 00:15:54.369 ************************************ 00:15:54.369 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:54.626 * Looking for test storage... 
00:15:54.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.626 --rc genhtml_branch_coverage=1 00:15:54.626 --rc genhtml_function_coverage=1 00:15:54.626 --rc genhtml_legend=1 00:15:54.626 --rc geninfo_all_blocks=1 00:15:54.626 --rc geninfo_unexecuted_blocks=1 00:15:54.626 00:15:54.626 ' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.626 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:54.626 --rc genhtml_branch_coverage=1 00:15:54.626 --rc genhtml_function_coverage=1 00:15:54.626 --rc genhtml_legend=1 00:15:54.626 --rc geninfo_all_blocks=1 00:15:54.626 --rc geninfo_unexecuted_blocks=1 00:15:54.626 00:15:54.626 ' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.626 --rc genhtml_branch_coverage=1 00:15:54.626 --rc genhtml_function_coverage=1 00:15:54.626 --rc genhtml_legend=1 00:15:54.626 --rc geninfo_all_blocks=1 00:15:54.626 --rc geninfo_unexecuted_blocks=1 00:15:54.626 00:15:54.626 ' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.626 --rc genhtml_branch_coverage=1 00:15:54.626 --rc genhtml_function_coverage=1 00:15:54.626 --rc genhtml_legend=1 00:15:54.626 --rc geninfo_all_blocks=1 00:15:54.626 --rc geninfo_unexecuted_blocks=1 00:15:54.626 00:15:54.626 ' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.626 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
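nvmf_nsid passed in about 4.8 seconds of wall time, which also closed out the nvmf_target_extra suite (just under seven minutes real, about seventeen minutes of CPU), and run_test has now launched the nvmf_host suite, starting with multicontroller.sh. Two recurring artifacts of this phase deserve a note. The giant PATH strings are a side effect of sourcing: every "source common.sh" re-runs /etc/opt/spdk-pkgdep/paths/export.sh, which prepends the same three tool directories again, so PATH grows by one copy per test file. In effect (a sketch reconstructed from the @2-@6 expansions above, not the verbatim script):

    # /etc/opt/spdk-pkgdep/paths/export.sh, as seen through xtrace:
    PATH=/opt/golangci/1.54.2/bin:$PATH   # @2
    PATH=/opt/go/1.21.1/bin:$PATH         # @3
    PATH=/opt/protoc/21.7/bin:$PATH       # @4
    export PATH                           # @5
    echo "$PATH"                          # @6

Likewise, the repeated "common.sh: line 33: [: : integer expression expected" message comes from an empty value reaching an integer test (the trace shows '[' '' -eq 1 ']'); it is noisy but harmless. A guarded form such as [ "${VAR:-0}" -eq 1 ] would silence it, where VAR stands in for whichever variable is unset here, since the trace does not name it.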
00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:54.626 ************************************ 00:15:54.626 START TEST nvmf_multicontroller 00:15:54.626 ************************************ 00:15:54.626 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:15:54.626 * Looking for test storage... 00:15:54.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:54.884 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:54.884 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:54.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.885 --rc genhtml_branch_coverage=1 00:15:54.885 --rc genhtml_function_coverage=1 00:15:54.885 --rc genhtml_legend=1 00:15:54.885 --rc geninfo_all_blocks=1 00:15:54.885 --rc geninfo_unexecuted_blocks=1 00:15:54.885 00:15:54.885 ' 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:54.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.885 --rc genhtml_branch_coverage=1 00:15:54.885 --rc genhtml_function_coverage=1 00:15:54.885 --rc genhtml_legend=1 00:15:54.885 --rc geninfo_all_blocks=1 00:15:54.885 --rc geninfo_unexecuted_blocks=1 00:15:54.885 00:15:54.885 ' 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:54.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.885 --rc genhtml_branch_coverage=1 00:15:54.885 --rc genhtml_function_coverage=1 00:15:54.885 --rc genhtml_legend=1 00:15:54.885 --rc geninfo_all_blocks=1 00:15:54.885 --rc geninfo_unexecuted_blocks=1 00:15:54.885 00:15:54.885 ' 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:54.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.885 --rc genhtml_branch_coverage=1 00:15:54.885 --rc genhtml_function_coverage=1 00:15:54.885 --rc genhtml_legend=1 00:15:54.885 --rc geninfo_all_blocks=1 00:15:54.885 --rc geninfo_unexecuted_blocks=1 00:15:54.885 00:15:54.885 ' 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:15:54.885 12:19:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:15:54.885 Cannot find device "nvmf_init_br"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:15:54.885 Cannot find device "nvmf_init_br2"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:15:54.885 Cannot find device "nvmf_tgt_br"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:15:54.885 Cannot find device "nvmf_tgt_br2"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:15:54.885 Cannot find device "nvmf_init_br"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:15:54.885 Cannot find device "nvmf_init_br2"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:15:54.885 Cannot find device "nvmf_tgt_br"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:15:54.885 Cannot find device "nvmf_tgt_br2"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:15:54.885 Cannot find device "nvmf_br"
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true
00:15:54.885 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:15:55.143 Cannot find device "nvmf_init_if"
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:15:55.143 Cannot find device "nvmf_init_if2"
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:55.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:55.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:15:55.143 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:15:55.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:55.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms
00:15:55.144
00:15:55.144 --- 10.0.0.3 ping statistics ---
00:15:55.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:55.144 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:15:55.144 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:15:55.144 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms
00:15:55.144
00:15:55.144 --- 10.0.0.4 ping statistics ---
00:15:55.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:55.144 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:15:55.144 12:19:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:55.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:55.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:15:55.144
00:15:55.144 --- 10.0.0.1 ping statistics ---
00:15:55.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:55.144 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:15:55.144 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:15:55.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:55.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms
00:15:55.402
00:15:55.402 --- 10.0.0.2 ping statistics ---
00:15:55.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:55.402 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:15:55.402 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:55.402 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0
00:15:55.402 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:55.402 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:55.402 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:55.402 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:55.402 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=85742
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 85742
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 85742 ']'
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:55.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:55.403 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.403 [2024-12-05 12:19:26.106473] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:15:55.403 [2024-12-05 12:19:26.106569] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:55.403 [2024-12-05 12:19:26.261358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:15:55.662 [2024-12-05 12:19:26.332871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:55.662 [2024-12-05 12:19:26.332960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:55.662 [2024-12-05 12:19:26.332976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:55.662 [2024-12-05 12:19:26.332987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:55.662 [2024-12-05 12:19:26.332996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:55.662 [2024-12-05 12:19:26.334626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:55.662 [2024-12-05 12:19:26.334746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:55.662 [2024-12-05 12:19:26.334759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:55.662 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:55.662 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:15:55.662 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:55.662 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:55.662 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.921 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:55.921 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:55.921 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.921 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 [2024-12-05 12:19:26.557688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 Malloc0
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 [2024-12-05 12:19:26.623168] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 [2024-12-05 12:19:26.630963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 Malloc1
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:55.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85782
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85782 /var/tmp/bdevperf.sock
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 85782 ']'
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:55.922 12:19:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.491 NVMe0n1
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.491 1
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.491 2024/12/05 12:19:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:15:56.491 request:
00:15:56.491 {
00:15:56.491 "method": "bdev_nvme_attach_controller",
00:15:56.491 "params": {
00:15:56.491 "name": "NVMe0",
00:15:56.491 "trtype": "tcp",
00:15:56.491 "traddr": "10.0.0.3",
00:15:56.491 "adrfam": "ipv4",
00:15:56.491 "trsvcid": "4420",
00:15:56.491 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:56.491 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:15:56.491 "hostaddr": "10.0.0.1",
00:15:56.491 "prchk_reftag": false,
00:15:56.491 "prchk_guard": false,
00:15:56.491 "hdgst": false,
00:15:56.491 "ddgst": false,
00:15:56.491 "allow_unrecognized_csi": false
00:15:56.491 }
00:15:56.491 }
00:15:56.491 Got JSON-RPC error response
00:15:56.491 GoRPCClient: error on JSON-RPC call
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.491 2024/12/05 12:19:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:15:56.491 request:
00:15:56.491 {
00:15:56.491 "method": "bdev_nvme_attach_controller",
00:15:56.491 "params": {
00:15:56.491 "name": "NVMe0",
00:15:56.491 "trtype": "tcp",
00:15:56.491 "traddr": "10.0.0.3",
00:15:56.491 "adrfam": "ipv4",
00:15:56.491 "trsvcid": "4420",
00:15:56.491 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:15:56.491 "hostaddr": "10.0.0.1",
00:15:56.491 "prchk_reftag": false,
00:15:56.491 "prchk_guard": false,
00:15:56.491 "hdgst": false,
00:15:56.491 "ddgst": false,
00:15:56.491 "allow_unrecognized_csi": false
00:15:56.491 }
00:15:56.491 }
00:15:56.491 Got JSON-RPC error response
00:15:56.491 GoRPCClient: error on JSON-RPC call
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:56.491 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.492 2024/12/05 12:19:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled
00:15:56.492 request:
00:15:56.492 {
00:15:56.492 "method": "bdev_nvme_attach_controller",
00:15:56.492 "params": {
00:15:56.492 "name": "NVMe0",
00:15:56.492 "trtype": "tcp",
00:15:56.492 "traddr": "10.0.0.3",
00:15:56.492 "adrfam": "ipv4",
00:15:56.492 "trsvcid": "4420",
00:15:56.492 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:56.492 "hostaddr": "10.0.0.1",
00:15:56.492 "prchk_reftag": false,
00:15:56.492 "prchk_guard": false,
00:15:56.492 "hdgst": false,
00:15:56.492 "ddgst": false,
00:15:56.492 "multipath": "disable",
00:15:56.492 "allow_unrecognized_csi": false
00:15:56.492 }
00:15:56.492 }
00:15:56.492 Got JSON-RPC error response
00:15:56.492 GoRPCClient: error on JSON-RPC call
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.492 2024/12/05 12:19:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:15:56.492 request:
00:15:56.492 {
00:15:56.492 "method": "bdev_nvme_attach_controller",
00:15:56.492 "params": {
00:15:56.492 "name": "NVMe0",
00:15:56.492 "trtype": "tcp",
00:15:56.492 "traddr": "10.0.0.3",
00:15:56.492 "adrfam": "ipv4",
00:15:56.492 "trsvcid": "4420",
00:15:56.492 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:56.492 "hostaddr": "10.0.0.1",
00:15:56.492 "prchk_reftag": false,
00:15:56.492 "prchk_guard": false,
00:15:56.492 "hdgst": false,
00:15:56.492 "ddgst": false,
00:15:56.492 "multipath": "failover",
00:15:56.492 "allow_unrecognized_csi": false
00:15:56.492 }
00:15:56.492 }
00:15:56.492 Got JSON-RPC error response
00:15:56.492 GoRPCClient: error on JSON-RPC call
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.492 NVMe0n1
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.492 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.750
00:15:56.750 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.750 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:56.750 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:15:56.750 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:56.750 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:56.750 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:56.750 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:15:56.750 12:19:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:58.127 {
00:15:58.127 "results": [
00:15:58.127 {
00:15:58.127 "job": "NVMe0n1",
00:15:58.127 "core_mask": "0x1",
00:15:58.127 "workload": "write",
00:15:58.127 "status": "finished",
00:15:58.127 "queue_depth": 128,
00:15:58.127 "io_size": 4096,
00:15:58.127 "runtime": 1.005956,
00:15:58.127 "iops": 21870.73788515601,
00:15:58.127 "mibps": 85.43256986389066,
00:15:58.127 "io_failed": 0,
00:15:58.127 "io_timeout": 0,
00:15:58.127 "avg_latency_us": 5843.669542954659,
00:15:58.127 "min_latency_us": 2055.447272727273,
00:15:58.127 "max_latency_us": 11379.432727272728
00:15:58.127 }
00:15:58.127 ],
00:15:58.127 "core_count": 1
00:15:58.127 }
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]]
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:58.127 nvme1n1
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr'
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]]
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:58.127 nvme1n1
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2
00:15:58.127 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr'
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]]
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 85782
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 85782 ']'
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 85782
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85782
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 85782
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85782'
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 85782
00:15:58.128 12:19:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 85782
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat
00:15:58.387 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:15:58.387 [2024-12-05 12:19:26.758422] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:15:58.387 [2024-12-05 12:19:26.758535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85782 ]
00:15:58.387 [2024-12-05 12:19:26.911652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:58.387 [2024-12-05 12:19:26.967207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:58.387 [2024-12-05 12:19:27.415426] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name bc0d144e-1675-46a4-8ecd-2b053cc239d1 already exists
00:15:58.387 [2024-12-05 12:19:27.415471] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:bc0d144e-1675-46a4-8ecd-2b053cc239d1 alias for bdev NVMe1n1
00:15:58.387 [2024-12-05 12:19:27.415503] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:15:58.387 Running I/O for 1 seconds...
00:15:58.387 21842.00 IOPS, 85.32 MiB/s
00:15:58.387
00:15:58.387 Latency(us)
00:15:58.387 [2024-12-05T12:19:29.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:58.387 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:15:58.387 NVMe0n1 : 1.01 21870.74 85.43 0.00 0.00 5843.67 2055.45 11379.43
00:15:58.387 [2024-12-05T12:19:29.256Z] ===================================================================================================================
00:15:58.387 [2024-12-05T12:19:29.256Z] Total : 21870.74 85.43 0.00 0.00 5843.67 2055.45 11379.43
00:15:58.387 Received shutdown signal, test time was about 1.000000 seconds
00:15:58.387
00:15:58.387 Latency(us)
00:15:58.387 [2024-12-05T12:19:29.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:58.387 [2024-12-05T12:19:29.256Z] ===================================================================================================================
00:15:58.387 [2024-12-05T12:19:29.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:58.387 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:58.387 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:58.387 rmmod nvme_tcp
00:15:58.387 rmmod nvme_fabrics
00:15:58.387 rmmod nvme_keyring
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 85742 ']'
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 85742
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 85742 ']'
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 85742
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85742
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85742'
killing process with pid 85742
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 85742
00:15:58.647 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 85742
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:15:58.906 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns
00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:15:59.165 ************************************ 00:15:59.165 END TEST nvmf_multicontroller 00:15:59.165 ************************************ 00:15:59.165 00:15:59.165 real 0m4.457s 00:15:59.165 user 0m12.552s 00:15:59.165 sys 0m1.247s 00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:59.165 12:19:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:15:59.165 12:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:59.165 12:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.165 12:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:59.165 ************************************ 00:15:59.165 START TEST nvmf_aer 00:15:59.165 ************************************ 00:15:59.165 12:19:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:15:59.165 * Looking for test storage... 00:15:59.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:59.165 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:59.165 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:15:59.165 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:59.424 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:59.424 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:59.424 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:59.424 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:59.424 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:59.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.425 --rc genhtml_branch_coverage=1 00:15:59.425 --rc genhtml_function_coverage=1 00:15:59.425 --rc genhtml_legend=1 00:15:59.425 --rc geninfo_all_blocks=1 00:15:59.425 --rc geninfo_unexecuted_blocks=1 00:15:59.425 00:15:59.425 ' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:59.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.425 --rc genhtml_branch_coverage=1 00:15:59.425 --rc genhtml_function_coverage=1 00:15:59.425 --rc genhtml_legend=1 00:15:59.425 --rc geninfo_all_blocks=1 00:15:59.425 --rc geninfo_unexecuted_blocks=1 00:15:59.425 00:15:59.425 ' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:59.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.425 --rc genhtml_branch_coverage=1 00:15:59.425 --rc genhtml_function_coverage=1 00:15:59.425 --rc genhtml_legend=1 00:15:59.425 --rc geninfo_all_blocks=1 00:15:59.425 --rc geninfo_unexecuted_blocks=1 00:15:59.425 00:15:59.425 ' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:59.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.425 --rc genhtml_branch_coverage=1 00:15:59.425 --rc genhtml_function_coverage=1 00:15:59.425 --rc genhtml_legend=1 00:15:59.425 --rc geninfo_all_blocks=1 00:15:59.425 --rc geninfo_unexecuted_blocks=1 00:15:59.425 00:15:59.425 ' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.425 
12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:59.425 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
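The nvmf_veth_init trace that follows builds the test network: veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, host-side ends enslaved to a single bridge, and 10.0.0.x/24 addresses on each end. Condensed into bare ip(8) commands for one of the two pairs (names and addresses are the ones the trace itself uses; the nvmf_init_if2/nvmf_tgt_if2 pair is built identically):

# Sketch of the topology nvmf_veth_init sets up below, not the helper itself.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br  # bridge stitches the halves together
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # tagged with an SPDK_NVMF comment in the real helper
ping -c 1 10.0.0.3   # host-to-namespace sanity check, as in the trace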
00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:59.425 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:59.426 Cannot find device "nvmf_init_br" 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:59.426 Cannot find device "nvmf_init_br2" 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:59.426 Cannot find device "nvmf_tgt_br" 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:59.426 Cannot find device "nvmf_tgt_br2" 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:59.426 Cannot find device "nvmf_init_br" 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:59.426 Cannot find device "nvmf_init_br2" 00:15:59.426 12:19:30 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:59.426 Cannot find device "nvmf_tgt_br" 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:59.426 Cannot find device "nvmf_tgt_br2" 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:59.426 Cannot find device "nvmf_br" 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:15:59.426 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:59.685 Cannot find device "nvmf_init_if" 00:15:59.685 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:15:59.685 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:59.685 Cannot find device "nvmf_init_if2" 00:15:59.685 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:59.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:59.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:15:59.686 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:15:59.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:15:59.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms
00:15:59.945
00:15:59.945 --- 10.0.0.3 ping statistics ---
00:15:59.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:59.945 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:15:59.945 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:15:59.945 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms
00:15:59.945
00:15:59.945 --- 10.0.0.4 ping statistics ---
00:15:59.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:59.945 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:15:59.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:59.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:15:59.945
00:15:59.945 --- 10.0.0.1 ping statistics ---
00:15:59.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:59.945 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:15:59.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:59.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms
00:15:59.945
00:15:59.945 --- 10.0.0.2 ping statistics ---
00:15:59.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:59.945 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=86075
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 86075
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 86075 ']'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:59.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:59.945 12:19:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:15:59.945 [2024-12-05 12:19:30.706442] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
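nvmfappstart above boils down to: launch nvmf_tgt inside the target namespace, remember its pid, and poll the RPC socket until the app answers. Roughly, using the same paths the trace shows (a sketch; the real waitforlisten caps retries at 100):

# Sketch of the start-and-poll handshake traced above.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # target died during startup
    sleep 0.5
done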
00:15:59.945 [2024-12-05 12:19:30.706536] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.204 [2024-12-05 12:19:30.863901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.204 [2024-12-05 12:19:30.924500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.204 [2024-12-05 12:19:30.924765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.204 [2024-12-05 12:19:30.924925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.204 [2024-12-05 12:19:30.925070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.204 [2024-12-05 12:19:30.925122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:00.204 [2024-12-05 12:19:30.926526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.204 [2024-12-05 12:19:30.926606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.204 [2024-12-05 12:19:30.926733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.204 [2024-12-05 12:19:30.926741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:01.139 [2024-12-05 12:19:31.804479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:01.139 Malloc0 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.139 [2024-12-05 12:19:31.869766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.139 [
00:16:01.139 {
00:16:01.139 "allow_any_host": true,
00:16:01.139 "hosts": [],
00:16:01.139 "listen_addresses": [],
00:16:01.139 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:01.139 "subtype": "Discovery"
00:16:01.139 },
00:16:01.139 {
00:16:01.139 "allow_any_host": true,
00:16:01.139 "hosts": [],
00:16:01.139 "listen_addresses": [
00:16:01.139 {
00:16:01.139 "adrfam": "IPv4",
00:16:01.139 "traddr": "10.0.0.3",
00:16:01.139 "trsvcid": "4420",
00:16:01.139 "trtype": "TCP"
00:16:01.139 }
00:16:01.139 ],
00:16:01.139 "max_cntlid": 65519,
00:16:01.139 "max_namespaces": 2,
00:16:01.139 "min_cntlid": 1,
00:16:01.139 "model_number": "SPDK bdev Controller",
00:16:01.139 "namespaces": [
00:16:01.139 {
00:16:01.139 "bdev_name": "Malloc0",
00:16:01.139 "name": "Malloc0",
00:16:01.139 "nguid": "96B4A27B2B204C2FBF0845919DE89829",
00:16:01.139 "nsid": 1,
00:16:01.139 "uuid": "96b4a27b-2b20-4c2f-bf08-45919de89829"
00:16:01.139 }
00:16:01.139 ],
00:16:01.139 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:01.139 "serial_number": "SPDK00000000000001",
00:16:01.139 "subtype": "NVMe"
00:16:01.139 }
00:16:01.139 ]
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86141
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:16:01.139 12:19:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']'
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.397 Malloc1
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.397 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 [
00:16:01.656 {
00:16:01.656 "allow_any_host": true,
00:16:01.656 "hosts": [],
00:16:01.656 "listen_addresses": [],
00:16:01.656 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:01.656 "subtype": "Discovery"
00:16:01.656 },
00:16:01.656 {
00:16:01.656 "allow_any_host": true,
00:16:01.656 "hosts": [],
00:16:01.656 "listen_addresses": [
00:16:01.656 {
00:16:01.656 "adrfam": "IPv4",
00:16:01.656 "traddr": "10.0.0.3",
00:16:01.656 Asynchronous Event Request test
00:16:01.656 Attaching to 10.0.0.3
00:16:01.656 Attached to 10.0.0.3
00:16:01.656 Registering asynchronous event callbacks...
00:16:01.656 Starting namespace attribute notice tests for all controllers...
00:16:01.656 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:16:01.656 aer_cb - Changed Namespace
00:16:01.656 Cleaning up...
00:16:01.656 "trsvcid": "4420",
00:16:01.656 "trtype": "TCP"
00:16:01.656 }
00:16:01.656 ],
00:16:01.656 "max_cntlid": 65519,
00:16:01.656 "max_namespaces": 2,
00:16:01.656 "min_cntlid": 1,
00:16:01.656 "model_number": "SPDK bdev Controller",
00:16:01.656 "namespaces": [
00:16:01.656 {
00:16:01.656 "bdev_name": "Malloc0",
00:16:01.656 "name": "Malloc0",
00:16:01.656 "nguid": "96B4A27B2B204C2FBF0845919DE89829",
00:16:01.656 "nsid": 1,
00:16:01.656 "uuid": "96b4a27b-2b20-4c2f-bf08-45919de89829"
00:16:01.656 },
00:16:01.656 {
00:16:01.656 "bdev_name": "Malloc1",
00:16:01.656 "name": "Malloc1",
00:16:01.656 "nguid": "B258634D46AC4369A8F7E2885016F2BE",
00:16:01.656 "nsid": 2,
00:16:01.656 "uuid": "b258634d-46ac-4369-a8f7-e2885016f2be"
00:16:01.656 }
00:16:01.656 ],
00:16:01.656 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:01.656 "serial_number": "SPDK00000000000001",
00:16:01.656 "subtype": "NVMe"
00:16:01.656 }
00:16:01.656 ]
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86141
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:01.656 rmmod nvme_tcp
00:16:01.656 rmmod nvme_fabrics
00:16:01.656 rmmod nvme_keyring
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 86075 ']'
00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 86075 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 86075 ']' 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 86075 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86075 00:16:01.656 killing process with pid 86075 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86075' 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 86075 00:16:01.656 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 86075 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:01.914 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:16:02.172 00:16:02.172 real 0m3.038s 00:16:02.172 user 0m7.626s 00:16:02.172 sys 0m0.892s 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.172 ************************************ 00:16:02.172 END TEST nvmf_aer 00:16:02.172 12:19:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:02.172 ************************************ 00:16:02.172 12:19:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:02.172 12:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:02.172 12:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.172 12:19:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:02.172 ************************************ 00:16:02.172 START TEST nvmf_async_init 00:16:02.172 ************************************ 00:16:02.172 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:02.430 * Looking for test storage... 00:16:02.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@345 -- # : 1 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.430 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:02.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.431 --rc genhtml_branch_coverage=1 00:16:02.431 --rc genhtml_function_coverage=1 00:16:02.431 --rc genhtml_legend=1 00:16:02.431 --rc geninfo_all_blocks=1 00:16:02.431 --rc geninfo_unexecuted_blocks=1 00:16:02.431 00:16:02.431 ' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:02.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.431 --rc genhtml_branch_coverage=1 00:16:02.431 --rc genhtml_function_coverage=1 00:16:02.431 --rc genhtml_legend=1 00:16:02.431 --rc geninfo_all_blocks=1 00:16:02.431 --rc geninfo_unexecuted_blocks=1 00:16:02.431 00:16:02.431 ' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:02.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.431 --rc genhtml_branch_coverage=1 00:16:02.431 --rc genhtml_function_coverage=1 00:16:02.431 --rc genhtml_legend=1 00:16:02.431 --rc geninfo_all_blocks=1 00:16:02.431 --rc geninfo_unexecuted_blocks=1 00:16:02.431 00:16:02.431 ' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:02.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.431 --rc genhtml_branch_coverage=1 00:16:02.431 --rc genhtml_function_coverage=1 00:16:02.431 --rc genhtml_legend=1 00:16:02.431 --rc geninfo_all_blocks=1 00:16:02.431 --rc geninfo_unexecuted_blocks=1 00:16:02.431 00:16:02.431 ' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.431 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:02.431 12:19:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4e3983984e514c2e973ce868129cc805 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
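Note: the nvmf_veth_init steps traced below reduce to a small veth-plus-bridge topology: one network namespace for the target, a veth pair per side, a bridge joining the peer ends, iptables ACCEPT rules for port 4420, and ping sanity checks. A minimal standalone sketch of the same topology (assuming root, iproute2 and iptables; the second if2/br2 interface pair from the trace is elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                        # enslave both free peer ends to the bridge
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                             # host -> target namespace, as in the trace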
00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:02.431 Cannot find device "nvmf_init_br" 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:02.431 Cannot find device "nvmf_init_br2" 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:02.431 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:02.689 Cannot find device "nvmf_tgt_br" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.689 Cannot find device "nvmf_tgt_br2" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:02.689 Cannot find device "nvmf_init_br" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:02.689 Cannot find device "nvmf_init_br2" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:02.689 Cannot find device "nvmf_tgt_br" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:02.689 Cannot find device "nvmf_tgt_br2" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:02.689 Cannot find device "nvmf_br" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:02.689 Cannot find device "nvmf_init_if" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:02.689 Cannot find device "nvmf_init_if2" 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:16:02.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:02.689 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.947 12:19:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:02.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:16:02.947 00:16:02.947 --- 10.0.0.3 ping statistics --- 00:16:02.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.947 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:02.947 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:02.947 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:16:02.947 00:16:02.947 --- 10.0.0.4 ping statistics --- 00:16:02.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.947 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:02.947 00:16:02.947 --- 10.0.0.1 ping statistics --- 00:16:02.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.947 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:02.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:02.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:16:02.947 00:16:02.947 --- 10.0.0.2 ping statistics --- 00:16:02.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.947 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=86367 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 86367 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 86367 ']' 00:16:02.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.947 12:19:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:02.947 [2024-12-05 12:19:33.778152] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:16:02.948 [2024-12-05 12:19:33.778462] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.205 [2024-12-05 12:19:33.927301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.206 [2024-12-05 12:19:33.969631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
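Note: each rpc_cmd in the trace below is a wrapper around scripts/rpk.py^Wscripts/rpc.py talking to the freshly started target over /var/tmp/spdk.sock. The async_init setup that follows boils down to roughly this sequence (a sketch assembled from the trace; flags, addresses and the nguid are copied verbatim, paths assume the repo root):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # target on core 0 inside the namespace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512            # 1024 MiB null bdev, 512 B blocks -> 2097152 blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4e3983984e514c2e973ce868129cc805
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

The nvme0n1 bdev this produces is what the bdev_get_bdevs dumps below describe (num_blocks 2097152, block_size 512, the same nguid as a hyphenated uuid).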
00:16:03.206 [2024-12-05 12:19:33.969687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.206 [2024-12-05 12:19:33.969698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.206 [2024-12-05 12:19:33.969705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.206 [2024-12-05 12:19:33.969711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.206 [2024-12-05 12:19:33.970013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.464 [2024-12-05 12:19:34.147648] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.464 null0 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4e3983984e514c2e973ce868129cc805 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.464 [2024-12-05 12:19:34.191698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.464 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.723 nvme0n1 00:16:03.723 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.723 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:03.723 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.723 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.723 [ 00:16:03.723 { 00:16:03.723 "aliases": [ 00:16:03.723 "4e398398-4e51-4c2e-973c-e868129cc805" 00:16:03.723 ], 00:16:03.723 "assigned_rate_limits": { 00:16:03.723 "r_mbytes_per_sec": 0, 00:16:03.723 "rw_ios_per_sec": 0, 00:16:03.723 "rw_mbytes_per_sec": 0, 00:16:03.723 "w_mbytes_per_sec": 0 00:16:03.723 }, 00:16:03.723 "block_size": 512, 00:16:03.723 "claimed": false, 00:16:03.723 "driver_specific": { 00:16:03.723 "mp_policy": "active_passive", 00:16:03.723 "nvme": [ 00:16:03.723 { 00:16:03.723 "ctrlr_data": { 00:16:03.723 "ana_reporting": false, 00:16:03.723 "cntlid": 1, 00:16:03.723 "firmware_revision": "25.01", 00:16:03.723 "model_number": "SPDK bdev Controller", 00:16:03.723 "multi_ctrlr": true, 00:16:03.723 "oacs": { 00:16:03.723 "firmware": 0, 00:16:03.723 "format": 0, 00:16:03.723 "ns_manage": 0, 00:16:03.723 "security": 0 00:16:03.723 }, 00:16:03.723 "serial_number": "00000000000000000000", 00:16:03.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.723 "vendor_id": "0x8086" 00:16:03.723 }, 00:16:03.723 "ns_data": { 00:16:03.723 "can_share": true, 00:16:03.723 "id": 1 00:16:03.723 }, 00:16:03.723 "trid": { 00:16:03.723 "adrfam": "IPv4", 00:16:03.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.723 "traddr": "10.0.0.3", 00:16:03.723 "trsvcid": "4420", 00:16:03.723 "trtype": "TCP" 00:16:03.723 }, 00:16:03.723 "vs": { 00:16:03.723 "nvme_version": "1.3" 00:16:03.723 } 00:16:03.723 } 00:16:03.723 ] 00:16:03.723 }, 00:16:03.723 "memory_domains": [ 00:16:03.723 { 00:16:03.723 "dma_device_id": "system", 00:16:03.723 "dma_device_type": 1 00:16:03.723 } 00:16:03.723 ], 00:16:03.723 "name": "nvme0n1", 00:16:03.723 "num_blocks": 2097152, 00:16:03.723 "numa_id": -1, 00:16:03.723 "product_name": "NVMe disk", 00:16:03.723 "supported_io_types": { 00:16:03.723 "abort": true, 
00:16:03.723 "compare": true, 00:16:03.723 "compare_and_write": true, 00:16:03.723 "copy": true, 00:16:03.723 "flush": true, 00:16:03.723 "get_zone_info": false, 00:16:03.723 "nvme_admin": true, 00:16:03.723 "nvme_io": true, 00:16:03.723 "nvme_io_md": false, 00:16:03.723 "nvme_iov_md": false, 00:16:03.723 "read": true, 00:16:03.723 "reset": true, 00:16:03.723 "seek_data": false, 00:16:03.723 "seek_hole": false, 00:16:03.723 "unmap": false, 00:16:03.723 "write": true, 00:16:03.723 "write_zeroes": true, 00:16:03.723 "zcopy": false, 00:16:03.723 "zone_append": false, 00:16:03.723 "zone_management": false 00:16:03.723 }, 00:16:03.723 "uuid": "4e398398-4e51-4c2e-973c-e868129cc805", 00:16:03.723 "zoned": false 00:16:03.723 } 00:16:03.723 ] 00:16:03.723 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.723 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:03.723 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.723 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.724 [2024-12-05 12:19:34.469630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:03.724 [2024-12-05 12:19:34.469719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f1660 (9): Bad file descriptor 00:16:03.983 [2024-12-05 12:19:34.601398] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 [ 00:16:03.983 { 00:16:03.983 "aliases": [ 00:16:03.983 "4e398398-4e51-4c2e-973c-e868129cc805" 00:16:03.983 ], 00:16:03.983 "assigned_rate_limits": { 00:16:03.983 "r_mbytes_per_sec": 0, 00:16:03.983 "rw_ios_per_sec": 0, 00:16:03.983 "rw_mbytes_per_sec": 0, 00:16:03.983 "w_mbytes_per_sec": 0 00:16:03.983 }, 00:16:03.983 "block_size": 512, 00:16:03.983 "claimed": false, 00:16:03.983 "driver_specific": { 00:16:03.983 "mp_policy": "active_passive", 00:16:03.983 "nvme": [ 00:16:03.983 { 00:16:03.983 "ctrlr_data": { 00:16:03.983 "ana_reporting": false, 00:16:03.983 "cntlid": 2, 00:16:03.983 "firmware_revision": "25.01", 00:16:03.983 "model_number": "SPDK bdev Controller", 00:16:03.983 "multi_ctrlr": true, 00:16:03.983 "oacs": { 00:16:03.983 "firmware": 0, 00:16:03.983 "format": 0, 00:16:03.983 "ns_manage": 0, 00:16:03.983 "security": 0 00:16:03.983 }, 00:16:03.983 "serial_number": "00000000000000000000", 00:16:03.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.983 "vendor_id": "0x8086" 00:16:03.983 }, 00:16:03.983 "ns_data": { 00:16:03.983 "can_share": true, 00:16:03.983 "id": 1 00:16:03.983 }, 00:16:03.983 "trid": { 00:16:03.983 "adrfam": "IPv4", 00:16:03.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.983 "traddr": "10.0.0.3", 00:16:03.983 "trsvcid": "4420", 00:16:03.983 "trtype": "TCP" 00:16:03.983 }, 00:16:03.983 "vs": { 00:16:03.983 "nvme_version": "1.3" 00:16:03.983 } 00:16:03.983 } 00:16:03.983 ] 
00:16:03.983 }, 00:16:03.983 "memory_domains": [ 00:16:03.983 { 00:16:03.983 "dma_device_id": "system", 00:16:03.983 "dma_device_type": 1 00:16:03.983 } 00:16:03.983 ], 00:16:03.983 "name": "nvme0n1", 00:16:03.983 "num_blocks": 2097152, 00:16:03.983 "numa_id": -1, 00:16:03.983 "product_name": "NVMe disk", 00:16:03.983 "supported_io_types": { 00:16:03.983 "abort": true, 00:16:03.983 "compare": true, 00:16:03.983 "compare_and_write": true, 00:16:03.983 "copy": true, 00:16:03.983 "flush": true, 00:16:03.983 "get_zone_info": false, 00:16:03.983 "nvme_admin": true, 00:16:03.983 "nvme_io": true, 00:16:03.983 "nvme_io_md": false, 00:16:03.983 "nvme_iov_md": false, 00:16:03.983 "read": true, 00:16:03.983 "reset": true, 00:16:03.983 "seek_data": false, 00:16:03.983 "seek_hole": false, 00:16:03.983 "unmap": false, 00:16:03.983 "write": true, 00:16:03.983 "write_zeroes": true, 00:16:03.983 "zcopy": false, 00:16:03.983 "zone_append": false, 00:16:03.983 "zone_management": false 00:16:03.983 }, 00:16:03.983 "uuid": "4e398398-4e51-4c2e-973c-e868129cc805", 00:16:03.983 "zoned": false 00:16:03.983 } 00:16:03.983 ] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QioQZNbft5 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QioQZNbft5 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.QioQZNbft5 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 [2024-12-05 12:19:34.681796] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:03.983 [2024-12-05 12:19:34.682115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 [2024-12-05 12:19:34.697794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:03.983 nvme0n1 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.983 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.983 [ 00:16:03.983 { 00:16:03.983 "aliases": [ 00:16:03.983 "4e398398-4e51-4c2e-973c-e868129cc805" 00:16:03.983 ], 00:16:03.983 "assigned_rate_limits": { 00:16:03.983 "r_mbytes_per_sec": 0, 00:16:03.983 "rw_ios_per_sec": 0, 00:16:03.983 "rw_mbytes_per_sec": 0, 00:16:03.983 "w_mbytes_per_sec": 0 00:16:03.983 }, 00:16:03.983 "block_size": 512, 00:16:03.983 "claimed": false, 00:16:03.983 "driver_specific": { 00:16:03.983 "mp_policy": "active_passive", 00:16:03.983 "nvme": [ 00:16:03.983 { 00:16:03.983 "ctrlr_data": { 00:16:03.983 "ana_reporting": false, 00:16:03.983 "cntlid": 3, 00:16:03.983 "firmware_revision": "25.01", 00:16:03.983 "model_number": "SPDK bdev Controller", 00:16:03.983 "multi_ctrlr": true, 00:16:03.983 "oacs": { 00:16:03.983 "firmware": 0, 00:16:03.983 "format": 0, 00:16:03.983 "ns_manage": 0, 00:16:03.983 "security": 0 00:16:03.983 }, 00:16:03.984 "serial_number": "00000000000000000000", 00:16:03.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.984 "vendor_id": "0x8086" 00:16:03.984 }, 00:16:03.984 "ns_data": { 00:16:03.984 "can_share": true, 00:16:03.984 "id": 1 00:16:03.984 }, 00:16:03.984 "trid": { 00:16:03.984 "adrfam": "IPv4", 00:16:03.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:03.984 "traddr": "10.0.0.3", 00:16:03.984 "trsvcid": "4421", 00:16:03.984 "trtype": "TCP" 00:16:03.984 }, 00:16:03.984 "vs": { 00:16:03.984 "nvme_version": "1.3" 00:16:03.984 } 00:16:03.984 } 00:16:03.984 ] 00:16:03.984 }, 00:16:03.984 "memory_domains": [ 00:16:03.984 { 00:16:03.984 "dma_device_id": "system", 00:16:03.984 "dma_device_type": 1 00:16:03.984 } 00:16:03.984 ], 00:16:03.984 "name": "nvme0n1", 00:16:03.984 "num_blocks": 
2097152, 00:16:03.984 "numa_id": -1, 00:16:03.984 "product_name": "NVMe disk", 00:16:03.984 "supported_io_types": { 00:16:03.984 "abort": true, 00:16:03.984 "compare": true, 00:16:03.984 "compare_and_write": true, 00:16:03.984 "copy": true, 00:16:03.984 "flush": true, 00:16:03.984 "get_zone_info": false, 00:16:03.984 "nvme_admin": true, 00:16:03.984 "nvme_io": true, 00:16:03.984 "nvme_io_md": false, 00:16:03.984 "nvme_iov_md": false, 00:16:03.984 "read": true, 00:16:03.984 "reset": true, 00:16:03.984 "seek_data": false, 00:16:03.984 "seek_hole": false, 00:16:03.984 "unmap": false, 00:16:03.984 "write": true, 00:16:03.984 "write_zeroes": true, 00:16:03.984 "zcopy": false, 00:16:03.984 "zone_append": false, 00:16:03.984 "zone_management": false 00:16:03.984 }, 00:16:03.984 "uuid": "4e398398-4e51-4c2e-973c-e868129cc805", 00:16:03.984 "zoned": false 00:16:03.984 } 00:16:03.984 ] 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.QioQZNbft5 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.984 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:04.243 rmmod nvme_tcp 00:16:04.243 rmmod nvme_fabrics 00:16:04.243 rmmod nvme_keyring 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 86367 ']' 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 86367 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 86367 ']' 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 86367 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86367 00:16:04.243 12:19:34 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.243 killing process with pid 86367 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86367' 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 86367 00:16:04.243 12:19:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 86367 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.501 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:16:04.759 00:16:04.759 real 0m2.374s 00:16:04.759 user 0m1.765s 00:16:04.759 sys 0m0.711s 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.759 ************************************ 00:16:04.759 END TEST nvmf_async_init 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:04.759 ************************************ 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.759 ************************************ 00:16:04.759 START TEST dma 00:16:04.759 ************************************ 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:04.759 * Looking for test storage... 00:16:04.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:04.759 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:16:05.016 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:05.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.017 --rc genhtml_branch_coverage=1 00:16:05.017 --rc genhtml_function_coverage=1 00:16:05.017 --rc genhtml_legend=1 00:16:05.017 --rc geninfo_all_blocks=1 00:16:05.017 --rc geninfo_unexecuted_blocks=1 00:16:05.017 00:16:05.017 ' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:05.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.017 --rc genhtml_branch_coverage=1 00:16:05.017 --rc genhtml_function_coverage=1 00:16:05.017 --rc genhtml_legend=1 00:16:05.017 --rc geninfo_all_blocks=1 00:16:05.017 --rc geninfo_unexecuted_blocks=1 00:16:05.017 00:16:05.017 ' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:05.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.017 --rc genhtml_branch_coverage=1 00:16:05.017 --rc genhtml_function_coverage=1 00:16:05.017 --rc genhtml_legend=1 00:16:05.017 --rc geninfo_all_blocks=1 00:16:05.017 --rc geninfo_unexecuted_blocks=1 00:16:05.017 00:16:05.017 ' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:05.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.017 --rc genhtml_branch_coverage=1 00:16:05.017 --rc genhtml_function_coverage=1 00:16:05.017 --rc genhtml_legend=1 00:16:05.017 --rc geninfo_all_blocks=1 00:16:05.017 --rc geninfo_unexecuted_blocks=1 00:16:05.017 00:16:05.017 ' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.017 12:19:35 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.017 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:16:05.017 00:16:05.017 real 0m0.214s 00:16:05.017 user 0m0.132s 00:16:05.017 sys 0m0.094s 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.017 ************************************ 00:16:05.017 END TEST dma 00:16:05.017 ************************************ 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.017 ************************************ 00:16:05.017 START TEST nvmf_identify 00:16:05.017 ************************************ 00:16:05.017 12:19:35 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:05.017 * Looking for test storage... 00:16:05.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:16:05.017 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:05.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.276 --rc genhtml_branch_coverage=1 00:16:05.276 --rc genhtml_function_coverage=1 00:16:05.276 --rc genhtml_legend=1 00:16:05.276 --rc geninfo_all_blocks=1 00:16:05.276 --rc geninfo_unexecuted_blocks=1 00:16:05.276 00:16:05.276 ' 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:05.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.276 --rc genhtml_branch_coverage=1 00:16:05.276 --rc genhtml_function_coverage=1 00:16:05.276 --rc genhtml_legend=1 00:16:05.276 --rc geninfo_all_blocks=1 00:16:05.276 --rc geninfo_unexecuted_blocks=1 00:16:05.276 00:16:05.276 ' 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:05.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.276 --rc genhtml_branch_coverage=1 00:16:05.276 --rc genhtml_function_coverage=1 00:16:05.276 --rc genhtml_legend=1 00:16:05.276 --rc geninfo_all_blocks=1 00:16:05.276 --rc geninfo_unexecuted_blocks=1 00:16:05.276 00:16:05.276 ' 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:05.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.276 --rc genhtml_branch_coverage=1 00:16:05.276 --rc genhtml_function_coverage=1 00:16:05.276 --rc genhtml_legend=1 00:16:05.276 --rc geninfo_all_blocks=1 00:16:05.276 --rc geninfo_unexecuted_blocks=1 00:16:05.276 00:16:05.276 ' 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.276 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.277 
12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.277 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.277 12:19:35 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:05.277 Cannot find device "nvmf_init_br" 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:05.277 Cannot find device "nvmf_init_br2" 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:05.277 Cannot find device "nvmf_tgt_br" 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:05.277 Cannot find device "nvmf_tgt_br2" 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:05.277 Cannot find device "nvmf_init_br" 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:05.277 12:19:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:05.277 Cannot find device "nvmf_init_br2" 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:05.277 Cannot find device "nvmf_tgt_br" 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:05.277 Cannot find device "nvmf_tgt_br2" 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:05.277 Cannot find device "nvmf_br" 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:05.277 Cannot find device "nvmf_init_if" 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:05.277 Cannot find device "nvmf_init_if2" 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:05.277 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.535 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.535 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.535 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.536 
12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:05.536 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:05.536 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:16:05.536 00:16:05.536 --- 10.0.0.3 ping statistics --- 00:16:05.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.536 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:05.536 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:05.536 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:16:05.536 00:16:05.536 --- 10.0.0.4 ping statistics --- 00:16:05.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.536 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:05.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:05.536 00:16:05.536 --- 10.0.0.1 ping statistics --- 00:16:05.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.536 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:05.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:05.536 00:16:05.536 --- 10.0.0.2 ping statistics --- 00:16:05.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.536 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86678 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86678 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 86678 ']' 00:16:05.536 
12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.536 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:05.794 [2024-12-05 12:19:36.455840] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:16:05.794 [2024-12-05 12:19:36.455932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.794 [2024-12-05 12:19:36.610826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.053 [2024-12-05 12:19:36.672002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.053 [2024-12-05 12:19:36.672070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.053 [2024-12-05 12:19:36.672086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.053 [2024-12-05 12:19:36.672097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.053 [2024-12-05 12:19:36.672107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
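The nvmftestinit output above amounts to a small, self-contained virtual topology: veth pairs, a network namespace for the target, and a bridge joining the host-side peers. A condensed recap in shell, using only commands and names visible in this log (the second initiator/target pair and the iptables comment wrapper are omitted for brevity):

  # Sketch of the fixture built by nvmf/common.sh above; names are the ones in this log.
  ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                # bridge ties the host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged traffic
  ping -c 1 10.0.0.3                                             # same sanity check as the log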
00:16:06.053 [2024-12-05 12:19:36.673489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.053 [2024-12-05 12:19:36.673554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.053 [2024-12-05 12:19:36.673690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.053 [2024-12-05 12:19:36.673702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:06.053 [2024-12-05 12:19:36.832685] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.053 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:06.053 Malloc0 00:16:06.054 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.054 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:06.054 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.054 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:06.313 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:06.314 [2024-12-05 12:19:36.938162] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:06.314 [ 00:16:06.314 { 00:16:06.314 "allow_any_host": true, 00:16:06.314 "hosts": [], 00:16:06.314 "listen_addresses": [ 00:16:06.314 { 00:16:06.314 "adrfam": "IPv4", 00:16:06.314 "traddr": "10.0.0.3", 00:16:06.314 "trsvcid": "4420", 00:16:06.314 "trtype": "TCP" 00:16:06.314 } 00:16:06.314 ], 00:16:06.314 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.314 "subtype": "Discovery" 00:16:06.314 }, 00:16:06.314 { 00:16:06.314 "allow_any_host": true, 00:16:06.314 "hosts": [], 00:16:06.314 "listen_addresses": [ 00:16:06.314 { 00:16:06.314 "adrfam": "IPv4", 00:16:06.314 "traddr": "10.0.0.3", 00:16:06.314 "trsvcid": "4420", 00:16:06.314 "trtype": "TCP" 00:16:06.314 } 00:16:06.314 ], 00:16:06.314 "max_cntlid": 65519, 00:16:06.314 "max_namespaces": 32, 00:16:06.314 "min_cntlid": 1, 00:16:06.314 "model_number": "SPDK bdev Controller", 00:16:06.314 "namespaces": [ 00:16:06.314 { 00:16:06.314 "bdev_name": "Malloc0", 00:16:06.314 "eui64": "ABCDEF0123456789", 00:16:06.314 "name": "Malloc0", 00:16:06.314 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:06.314 "nsid": 1, 00:16:06.314 "uuid": "89b40487-51b1-4ccb-b759-787e19c41715" 00:16:06.314 } 00:16:06.314 ], 00:16:06.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.314 "serial_number": "SPDK00000000000001", 00:16:06.314 "subtype": "NVMe" 00:16:06.314 } 00:16:06.314 ] 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.314 12:19:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:06.314 [2024-12-05 12:19:36.993766] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
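Before the identify output resumes, note how the subsystem JSON above came to be: identify.sh drove a handful of provisioning RPCs through rpc_cmd. A condensed sketch of the same sequence with scripts/rpc.py, with all values copied from this log (rpc.py talks to the /var/tmp/spdk.sock socket, so no netns wrapper is needed):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192        # transport options copied verbatim; -u sets the in-capsule data size
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_get_subsystems                            # prints the JSON listing shown above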
00:16:06.314 [2024-12-05 12:19:36.993834] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86723 ] 00:16:06.314 [2024-12-05 12:19:37.143899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:06.314 [2024-12-05 12:19:37.143972] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:06.314 [2024-12-05 12:19:37.143979] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:06.314 [2024-12-05 12:19:37.143991] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:06.314 [2024-12-05 12:19:37.144001] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:06.314 [2024-12-05 12:19:37.144320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:06.314 [2024-12-05 12:19:37.144395] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7c2d90 0 00:16:06.314 [2024-12-05 12:19:37.149334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:06.314 [2024-12-05 12:19:37.149375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:06.314 [2024-12-05 12:19:37.149381] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:06.314 [2024-12-05 12:19:37.149384] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:06.314 [2024-12-05 12:19:37.149419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.149427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.149431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.314 [2024-12-05 12:19:37.149443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:06.314 [2024-12-05 12:19:37.149475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.314 [2024-12-05 12:19:37.157313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.314 [2024-12-05 12:19:37.157358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.314 [2024-12-05 12:19:37.157378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.314 [2024-12-05 12:19:37.157396] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:06.314 [2024-12-05 12:19:37.157404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:06.314 [2024-12-05 12:19:37.157410] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:06.314 [2024-12-05 12:19:37.157428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
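The DEBUG trace that follows is spdk_nvme_identify opening a TCP socket to 10.0.0.3:4420, sending the ICReq, and issuing FABRIC CONNECT on the admin queue. The same discovery endpoint can also be queried outside SPDK; a sketch, assuming nvme-cli is installed (modprobe nvme-tcp already ran above):

  # SPDK invocation, copied from this log:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  # Kernel-initiator cross-check (assumption: nvme-cli is present on the host):
  nvme discover -t tcp -a 10.0.0.3 -s 4420            # should report cnode1 via the discovery service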
00:16:06.314 [2024-12-05 12:19:37.157437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.314 [2024-12-05 12:19:37.157446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.314 [2024-12-05 12:19:37.157476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.314 [2024-12-05 12:19:37.157545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.314 [2024-12-05 12:19:37.157552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.314 [2024-12-05 12:19:37.157555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.314 [2024-12-05 12:19:37.157564] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:06.314 [2024-12-05 12:19:37.157571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:06.314 [2024-12-05 12:19:37.157578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.314 [2024-12-05 12:19:37.157593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.314 [2024-12-05 12:19:37.157643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.314 [2024-12-05 12:19:37.157695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.314 [2024-12-05 12:19:37.157701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.314 [2024-12-05 12:19:37.157705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.314 [2024-12-05 12:19:37.157714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:06.314 [2024-12-05 12:19:37.157722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:06.314 [2024-12-05 12:19:37.157729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.314 [2024-12-05 12:19:37.157744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.314 [2024-12-05 12:19:37.157763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.314 [2024-12-05 12:19:37.157819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.314 [2024-12-05 12:19:37.157826] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.314 [2024-12-05 12:19:37.157829] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.314 [2024-12-05 12:19:37.157839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:06.314 [2024-12-05 12:19:37.157848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.314 [2024-12-05 12:19:37.157856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.314 [2024-12-05 12:19:37.157863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.314 [2024-12-05 12:19:37.157881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.314 [2024-12-05 12:19:37.157941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.314 [2024-12-05 12:19:37.157948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.314 [2024-12-05 12:19:37.157952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.157956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.315 [2024-12-05 12:19:37.157961] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:06.315 [2024-12-05 12:19:37.157966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:06.315 [2024-12-05 12:19:37.157974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:06.315 [2024-12-05 12:19:37.158084] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:06.315 [2024-12-05 12:19:37.158100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:06.315 [2024-12-05 12:19:37.158110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.315 [2024-12-05 12:19:37.158146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.315 [2024-12-05 12:19:37.158217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.315 [2024-12-05 12:19:37.158224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.315 [2024-12-05 12:19:37.158227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
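Each send/complete pair in the trace above is one admin capsule: after CONNECT the host reads VS and CAP, checks CC, disables the controller if needed, then sets CC.EN = 1 and polls for CSTS.RDY = 1, which is the standard NVMe enable state machine carried over fabrics PROPERTY GET/SET instead of BAR accesses. To watch the same exchange on the wire, one could capture inside the target namespace; tcpdump here is an assumption, not part of the test:

  # Interface and namespace names come from this log; tcpdump/wireshark are add-ons.
  ip netns exec nvmf_tgt_ns_spdk tcpdump -i nvmf_tgt_if -w /tmp/nvmf.pcap 'tcp port 4420' &
  # ...re-run spdk_nvme_identify, then open /tmp/nvmf.pcap in wireshark (nvme-tcp dissector)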
00:16:06.315 [2024-12-05 12:19:37.158231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.315 [2024-12-05 12:19:37.158236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:06.315 [2024-12-05 12:19:37.158246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.315 [2024-12-05 12:19:37.158291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.315 [2024-12-05 12:19:37.158347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.315 [2024-12-05 12:19:37.158354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.315 [2024-12-05 12:19:37.158357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.315 [2024-12-05 12:19:37.158380] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:06.315 [2024-12-05 12:19:37.158386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:06.315 [2024-12-05 12:19:37.158393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:06.315 [2024-12-05 12:19:37.158403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:06.315 [2024-12-05 12:19:37.158414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.315 [2024-12-05 12:19:37.158446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.315 [2024-12-05 12:19:37.158558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.315 [2024-12-05 12:19:37.158564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.315 [2024-12-05 12:19:37.158568] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158572] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c2d90): datao=0, datal=4096, cccid=0 00:16:06.315 [2024-12-05 12:19:37.158576] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x803600) on tqpair(0x7c2d90): expected_datao=0, payload_size=4096 00:16:06.315 [2024-12-05 12:19:37.158581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:16:06.315 [2024-12-05 12:19:37.158588] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158592] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.315 [2024-12-05 12:19:37.158607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.315 [2024-12-05 12:19:37.158610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.315 [2024-12-05 12:19:37.158623] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:06.315 [2024-12-05 12:19:37.158628] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:06.315 [2024-12-05 12:19:37.158633] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:06.315 [2024-12-05 12:19:37.158643] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:06.315 [2024-12-05 12:19:37.158647] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:06.315 [2024-12-05 12:19:37.158652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:06.315 [2024-12-05 12:19:37.158661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:06.315 [2024-12-05 12:19:37.158680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158695] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:06.315 [2024-12-05 12:19:37.158716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.315 [2024-12-05 12:19:37.158779] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.315 [2024-12-05 12:19:37.158786] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.315 [2024-12-05 12:19:37.158789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.315 [2024-12-05 12:19:37.158801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.315 [2024-12-05 12:19:37.158820] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.315 [2024-12-05 12:19:37.158839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.315 [2024-12-05 12:19:37.158857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.315 [2024-12-05 12:19:37.158874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:06.315 [2024-12-05 12:19:37.158882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:06.315 [2024-12-05 12:19:37.158889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.158893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.158899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.315 [2024-12-05 12:19:37.158925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803600, cid 0, qid 0 00:16:06.315 [2024-12-05 12:19:37.158932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803780, cid 1, qid 0 00:16:06.315 [2024-12-05 12:19:37.158937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803900, cid 2, qid 0 00:16:06.315 [2024-12-05 12:19:37.158941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.315 [2024-12-05 12:19:37.158946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803c00, cid 4, qid 0 00:16:06.315 [2024-12-05 12:19:37.159066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.315 [2024-12-05 12:19:37.159073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.315 [2024-12-05 12:19:37.159076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.159080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803c00) on tqpair=0x7c2d90 00:16:06.315 [2024-12-05 12:19:37.159086] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:06.315 [2024-12-05 12:19:37.159091] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:06.315 [2024-12-05 12:19:37.159102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.315 [2024-12-05 12:19:37.159106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c2d90) 00:16:06.315 [2024-12-05 12:19:37.159113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.316 [2024-12-05 12:19:37.159133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803c00, cid 4, qid 0 00:16:06.316 [2024-12-05 12:19:37.159209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.316 [2024-12-05 12:19:37.159216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.316 [2024-12-05 12:19:37.159219] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159222] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c2d90): datao=0, datal=4096, cccid=4 00:16:06.316 [2024-12-05 12:19:37.159227] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x803c00) on tqpair(0x7c2d90): expected_datao=0, payload_size=4096 00:16:06.316 [2024-12-05 12:19:37.159232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159238] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159242] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.316 [2024-12-05 12:19:37.159256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.316 [2024-12-05 12:19:37.159259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803c00) on tqpair=0x7c2d90 00:16:06.316 [2024-12-05 12:19:37.159275] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:06.316 [2024-12-05 12:19:37.159315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c2d90) 00:16:06.316 [2024-12-05 12:19:37.159329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.316 [2024-12-05 12:19:37.159337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7c2d90) 00:16:06.316 [2024-12-05 12:19:37.159350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.316 [2024-12-05 12:19:37.159388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x803c00, cid 4, qid 0 00:16:06.316 [2024-12-05 12:19:37.159395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803d80, cid 5, qid 0 00:16:06.316 [2024-12-05 12:19:37.159515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.316 [2024-12-05 12:19:37.159531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.316 [2024-12-05 12:19:37.159535] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159539] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c2d90): datao=0, datal=1024, cccid=4 00:16:06.316 [2024-12-05 12:19:37.159543] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x803c00) on tqpair(0x7c2d90): expected_datao=0, payload_size=1024 00:16:06.316 [2024-12-05 12:19:37.159547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159554] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159557] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.316 [2024-12-05 12:19:37.159569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.316 [2024-12-05 12:19:37.159572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.316 [2024-12-05 12:19:37.159576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803d80) on tqpair=0x7c2d90 00:16:06.579 [2024-12-05 12:19:37.202220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.579 [2024-12-05 12:19:37.202240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.579 [2024-12-05 12:19:37.202259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.579 [2024-12-05 12:19:37.202263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803c00) on tqpair=0x7c2d90 00:16:06.579 [2024-12-05 12:19:37.202277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.579 [2024-12-05 12:19:37.202282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c2d90) 00:16:06.579 [2024-12-05 12:19:37.202290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.579 [2024-12-05 12:19:37.202329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803c00, cid 4, qid 0 00:16:06.579 [2024-12-05 12:19:37.202405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.579 [2024-12-05 12:19:37.202411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.579 [2024-12-05 12:19:37.202415] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.579 [2024-12-05 12:19:37.202418] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c2d90): datao=0, datal=3072, cccid=4 00:16:06.579 [2024-12-05 12:19:37.202422] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x803c00) on tqpair(0x7c2d90): expected_datao=0, payload_size=3072 00:16:06.579 [2024-12-05 12:19:37.202426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.579 [2024-12-05 12:19:37.202433] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
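The trace above is the tail of controller initialization on the discovery qpair: the driver arms its keep-alive timer ("Sending keep alive every 5000000 us", i.e. a KEEP ALIVE sent at half of a 10000 ms KATO), moves the controller to ready, and issues IDENTIFY (cdw10:00000001, CNS 1 = identify controller) followed by the first discovery GET LOG PAGE. A minimal host-side sketch of that connect step against SPDK's public API follows; the helper name and hard-coded address literals are illustrative, not taken from the test scripts:

    /* Hypothetical sketch: connect to the discovery subsystem over TCP with a
     * keep-alive timeout that yields the 5 s cadence seen in the trace.
     * Assumes spdk_env_init() has already been called. */
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *
    connect_discovery(void)
    {
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr_opts opts;

        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.3");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", SPDK_NVMF_DISCOVERY_NQN);

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        opts.keep_alive_timeout_ms = 10000;   /* KATO; driver keep-alives at KATO/2 */

        return spdk_nvme_connect(&trid, &opts, sizeof(opts));   /* NULL on failure */
    }

The same handshake can be driven from a shell with nvme-cli ("nvme discover -t tcp -a 10.0.0.3 -s 4420"), which is roughly what the nvmf test scripts exercise here.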
00:16:06.579 [2024-12-05 12:19:37.202436] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.579 [2024-12-05 12:19:37.202444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.579 [2024-12-05 12:19:37.202449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.579 [2024-12-05 12:19:37.202452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.579 [2024-12-05 12:19:37.202456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803c00) on tqpair=0x7c2d90 00:16:06.579 [2024-12-05 12:19:37.202466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.579 [2024-12-05 12:19:37.202471] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7c2d90) 00:16:06.579 [2024-12-05 12:19:37.202493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.579 [2024-12-05 12:19:37.202531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803c00, cid 4, qid 0 00:16:06.579 [2024-12-05 12:19:37.202605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.579 [2024-12-05 12:19:37.202611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.579 [2024-12-05 12:19:37.202615] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.579 [2024-12-05 12:19:37.202618] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7c2d90): datao=0, datal=8, cccid=4 00:16:06.579 [2024-12-05 12:19:37.202622] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x803c00) on tqpair(0x7c2d90): expected_datao=0, payload_size=8 00:16:06.580 [2024-12-05 12:19:37.202626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.580 [2024-12-05 12:19:37.202632] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.580 [2024-12-05 12:19:37.202636] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.580 ===================================================== 00:16:06.580 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:06.580 ===================================================== 00:16:06.580 Controller Capabilities/Features 00:16:06.580 ================================ 00:16:06.580 Vendor ID: 0000 00:16:06.580 Subsystem Vendor ID: 0000 00:16:06.580 Serial Number: .................... 00:16:06.580 Model Number: ........................................
00:16:06.580 Firmware Version: 25.01 00:16:06.580 Recommended Arb Burst: 0 00:16:06.580 IEEE OUI Identifier: 00 00 00 00:16:06.580 Multi-path I/O 00:16:06.580 May have multiple subsystem ports: No 00:16:06.580 May have multiple controllers: No 00:16:06.580 Associated with SR-IOV VF: No 00:16:06.580 Max Data Transfer Size: 131072 00:16:06.580 Max Number of Namespaces: 0 00:16:06.580 Max Number of I/O Queues: 1024 00:16:06.580 NVMe Specification Version (VS): 1.3 00:16:06.580 NVMe Specification Version (Identify): 1.3 00:16:06.580 Maximum Queue Entries: 128 00:16:06.580 Contiguous Queues Required: Yes 00:16:06.580 Arbitration Mechanisms Supported 00:16:06.580 Weighted Round Robin: Not Supported 00:16:06.580 Vendor Specific: Not Supported 00:16:06.580 Reset Timeout: 15000 ms 00:16:06.580 Doorbell Stride: 4 bytes 00:16:06.580 NVM Subsystem Reset: Not Supported 00:16:06.580 Command Sets Supported 00:16:06.580 NVM Command Set: Supported 00:16:06.580 Boot Partition: Not Supported 00:16:06.580 Memory Page Size Minimum: 4096 bytes 00:16:06.580 Memory Page Size Maximum: 4096 bytes 00:16:06.580 Persistent Memory Region: Not Supported 00:16:06.580 Optional Asynchronous Events Supported 00:16:06.580 Namespace Attribute Notices: Not Supported 00:16:06.580 Firmware Activation Notices: Not Supported 00:16:06.580 ANA Change Notices: Not Supported 00:16:06.580 PLE Aggregate Log Change Notices: Not Supported 00:16:06.580 LBA Status Info Alert Notices: Not Supported 00:16:06.580 EGE Aggregate Log Change Notices: Not Supported 00:16:06.580 Normal NVM Subsystem Shutdown event: Not Supported 00:16:06.580 Zone Descriptor Change Notices: Not Supported 00:16:06.580 Discovery Log Change Notices: Supported 00:16:06.580 Controller Attributes 00:16:06.580 128-bit Host Identifier: Not Supported 00:16:06.580 Non-Operational Permissive Mode: Not Supported 00:16:06.580 NVM Sets: Not Supported 00:16:06.580 Read Recovery Levels: Not Supported 00:16:06.580 Endurance Groups: Not Supported 00:16:06.580 Predictable Latency Mode: Not Supported 00:16:06.580 Traffic Based Keep Alive: Not Supported 00:16:06.580 Namespace Granularity: Not Supported 00:16:06.580 SQ Associations: Not Supported 00:16:06.580 UUID List: Not Supported 00:16:06.580 Multi-Domain Subsystem: Not Supported 00:16:06.580 Fixed Capacity Management: Not Supported 00:16:06.580 Variable Capacity Management: Not Supported 00:16:06.580 Delete Endurance Group: Not Supported 00:16:06.580 Delete NVM Set: Not Supported 00:16:06.580 Extended LBA Formats Supported: Not Supported 00:16:06.580 Flexible Data Placement Supported: Not Supported 00:16:06.580 00:16:06.580 Controller Memory Buffer Support 00:16:06.580 ================================ 00:16:06.580 Supported: No 00:16:06.580 00:16:06.580 Persistent Memory Region Support 00:16:06.580 ================================ 00:16:06.580 Supported: No 00:16:06.580 00:16:06.580 Admin Command Set Attributes 00:16:06.580 ============================ 00:16:06.580 Security Send/Receive: Not Supported 00:16:06.580 Format NVM: Not Supported 00:16:06.580 Firmware Activate/Download: Not Supported 00:16:06.580 Namespace Management: Not Supported 00:16:06.580 Device Self-Test: Not Supported 00:16:06.580 Directives: Not Supported 00:16:06.580 NVMe-MI: Not Supported 00:16:06.580 Virtualization Management: Not Supported 00:16:06.580 Doorbell Buffer Config: Not Supported 00:16:06.580 Get LBA Status Capability: Not Supported 00:16:06.580 Command & Feature Lockdown Capability: Not Supported 00:16:06.580 Abort Command Limit: 1 00:16:06.580 Async
Event Request Limit: 4 00:16:06.580 Number of Firmware Slots: N/A 00:16:06.580 Firmware Slot 1 Read-Only: N/A 00:16:06.580 [2024-12-05 12:19:37.247308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.580 [2024-12-05 12:19:37.247329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.580 [2024-12-05 12:19:37.247348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.580 [2024-12-05 12:19:37.247352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803c00) on tqpair=0x7c2d90 00:16:06.580 Firmware Activation Without Reset: N/A 00:16:06.580 Multiple Update Detection Support: N/A 00:16:06.580 Firmware Update Granularity: No Information Provided 00:16:06.580 Per-Namespace SMART Log: No 00:16:06.580 Asymmetric Namespace Access Log Page: Not Supported 00:16:06.580 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:06.580 Command Effects Log Page: Not Supported 00:16:06.580 Get Log Page Extended Data: Supported 00:16:06.580 Telemetry Log Pages: Not Supported 00:16:06.580 Persistent Event Log Pages: Not Supported 00:16:06.580 Supported Log Pages Log Page: May Support 00:16:06.580 Commands Supported & Effects Log Page: Not Supported 00:16:06.580 Feature Identifiers & Effects Log Page: May Support 00:16:06.580 NVMe-MI Commands & Effects Log Page: May Support 00:16:06.580 Data Area 4 for Telemetry Log: Not Supported 00:16:06.580 Error Log Page Entries Supported: 128 00:16:06.580 Keep Alive: Not Supported 00:16:06.580 00:16:06.580 NVM Command Set Attributes 00:16:06.580 ========================== 00:16:06.580 Submission Queue Entry Size 00:16:06.580 Max: 1 00:16:06.580 Min: 1 00:16:06.580 Completion Queue Entry Size 00:16:06.580 Max: 1 00:16:06.580 Min: 1 00:16:06.580 Number of Namespaces: 0 00:16:06.580 Compare Command: Not Supported 00:16:06.580 Write Uncorrectable Command: Not Supported 00:16:06.580 Dataset Management Command: Not Supported 00:16:06.580 Write Zeroes Command: Not Supported 00:16:06.580 Set Features Save Field: Not Supported 00:16:06.580 Reservations: Not Supported 00:16:06.580 Timestamp: Not Supported 00:16:06.580 Copy: Not Supported 00:16:06.580 Volatile Write Cache: Not Present 00:16:06.580 Atomic Write Unit (Normal): 1 00:16:06.580 Atomic Write Unit (PFail): 1 00:16:06.580 Atomic Compare & Write Unit: 1 00:16:06.580 Fused Compare & Write: Supported 00:16:06.580 Scatter-Gather List 00:16:06.580 SGL Command Set: Supported 00:16:06.580 SGL Keyed: Supported 00:16:06.580 SGL Bit Bucket Descriptor: Not Supported 00:16:06.580 SGL Metadata Pointer: Not Supported 00:16:06.580 Oversized SGL: Not Supported 00:16:06.580 SGL Metadata Address: Not Supported 00:16:06.580 SGL Offset: Supported 00:16:06.580 Transport SGL Data Block: Not Supported 00:16:06.580 Replay Protected Memory Block: Not Supported 00:16:06.580 00:16:06.580 Firmware Slot Information 00:16:06.580 ========================= 00:16:06.580 Active slot: 0 00:16:06.580 00:16:06.580 00:16:06.580 Error Log 00:16:06.580 ========= 00:16:06.580 00:16:06.580 Active Namespaces 00:16:06.580 ================= 00:16:06.580 Discovery Log Page 00:16:06.580 ================== 00:16:06.580 Generation Counter: 2 00:16:06.580 Number of Records: 2 00:16:06.580 Record Format: 0 00:16:06.580 00:16:06.580 Discovery Log Entry 0 00:16:06.580 ---------------------- 00:16:06.580 Transport Type: 3 (TCP) 00:16:06.580 Address Family: 1 (IPv4) 00:16:06.580 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:06.580 Entry Flags: 00:16:06.580 Duplicate Returned
Information: 1 00:16:06.580 Explicit Persistent Connection Support for Discovery: 1 00:16:06.580 Transport Requirements: 00:16:06.580 Secure Channel: Not Required 00:16:06.580 Port ID: 0 (0x0000) 00:16:06.580 Controller ID: 65535 (0xffff) 00:16:06.580 Admin Max SQ Size: 128 00:16:06.580 Transport Service Identifier: 4420 00:16:06.580 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:06.580 Transport Address: 10.0.0.3 00:16:06.580 Discovery Log Entry 1 00:16:06.580 ---------------------- 00:16:06.580 Transport Type: 3 (TCP) 00:16:06.580 Address Family: 1 (IPv4) 00:16:06.580 Subsystem Type: 2 (NVM Subsystem) 00:16:06.580 Entry Flags: 00:16:06.580 Duplicate Returned Information: 0 00:16:06.580 Explicit Persistent Connection Support for Discovery: 0 00:16:06.580 Transport Requirements: 00:16:06.580 Secure Channel: Not Required 00:16:06.580 Port ID: 0 (0x0000) 00:16:06.580 Controller ID: 65535 (0xffff) 00:16:06.580 Admin Max SQ Size: 128 00:16:06.581 Transport Service Identifier: 4420 00:16:06.581 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:06.581 Transport Address: 10.0.0.3 [2024-12-05 12:19:37.247481] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:16:06.581 [2024-12-05 12:19:37.247497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803600) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.247504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.581 [2024-12-05 12:19:37.247509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803780) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.247513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.581 [2024-12-05 12:19:37.247518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803900) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.247522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.581 [2024-12-05 12:19:37.247526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.247530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:06.581 [2024-12-05 12:19:37.247543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.247559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.247587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.247662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.247670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.247673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
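The block above is the decoded payload of those reads: a discovery log with generation counter 2 and two records, entry 0 describing the discovery subsystem itself and entry 1 the NVM subsystem nqn.2016-06.io.spdk:cnode1, both NVMe/TCP on 10.0.0.3:4420. The GET LOG PAGE commands that fetched it are visible in the trace: LID 0x70 in the low byte of cdw10, with NUMDL sized for the 1024-byte header (cdw10:00ff0070), the 3072 bytes of entries (cdw10:02ff0070), and a final 8-byte re-read of the generation counter (cdw10:00010070) to check that the log did not change mid-fetch. A hedged sketch of the header read with SPDK's public host API (helper names are illustrative; error handling elided):

    /* Hypothetical sketch: fetch the 1 KiB discovery log header (LID 0x70) and
     * print genctr/numrec, mirroring the first GET LOG PAGE in the trace. */
    #include "spdk/nvme.h"

    static void
    log_page_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        *(bool *)ctx = true;   /* spdk_nvme_cpl_is_error(cpl) check omitted */
    }

    static void
    read_discovery_header(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvmf_discovery_log_page hdr = {0};   /* 1024-byte header */
        bool done = false;

        /* LID 0x70 == SPDK_NVME_LOG_DISCOVERY; header carries genctr/numrec. */
        spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                         SPDK_NVME_GLOBAL_NS_TAG, &hdr,
                                         sizeof(hdr), 0, log_page_done, &done);
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", hdr.genctr, hdr.numrec);
    }

A complete fetch would follow with an offset read of numrec discovery entries and then the same 8-byte genctr re-read seen above, retrying if the counter moved.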
00:16:06.581 [2024-12-05 12:19:37.247677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.247685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.247699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.247723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.247799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.247806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.247809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.247818] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:16:06.581 [2024-12-05 12:19:37.247822] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:16:06.581 [2024-12-05 12:19:37.247832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.247847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.247865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.247924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.247936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.247940] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.247954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.247962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.247969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.247998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.248058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.248064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
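Everything from "Prepare to destruct SSD" above is the teardown path: the four in-flight admin requests are completed as ABORTED - SQ DELETION, CC is read and rewritten through FABRIC PROPERTY GET/SET capsules to request a normal shutdown, and because the controller reports RTD3E = 0 the driver falls back to its 10000 ms shutdown timeout. The long run of FABRIC PROPERTY GET entries that follows is that timeout loop: the driver repeatedly reads CSTS until SHST reports shutdown complete. In terms of the public register accessors the poll amounts to something like this (a sketch of the equivalent check, not the internal code path):

    /* Sketch: each FABRIC PROPERTY GET below corresponds to a CSTS read; on a
     * TCP controller the register access is carried by a Fabrics Property Get
     * capsule on the admin qpair. */
    #include "spdk/nvme.h"

    static bool
    shutdown_finished(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
    }

The loop ends either when SHST flips to "complete" or when the 10 s shutdown timeout expires, after which the qpair is torn down regardless.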
00:16:06.581 [2024-12-05 12:19:37.248068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.248081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.248095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.248114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.248174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.248180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.248183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.248212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.248226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.248243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.248323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.248332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.248335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.248349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.248364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.248400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.248455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.248461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.248464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90
[2024-12-05 12:19:37.248478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.248493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.248512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.248574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.248581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.248584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.248598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.248613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.248631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.248702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.248709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.248712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.248725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.581 [2024-12-05 12:19:37.248740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.581 [2024-12-05 12:19:37.248758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.581 [2024-12-05 12:19:37.248816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.581 [2024-12-05 12:19:37.248822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.581 [2024-12-05 12:19:37.248826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.581 [2024-12-05 12:19:37.248839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.581 [2024-12-05 12:19:37.248844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 
12:19:37.248847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.248854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.248872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.248927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.248934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.248937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.248941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.248951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.248955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.248959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.248966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.248983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.249046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.249053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.249056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.249069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249078] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.249084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.249103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.249161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.249167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.249170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.249183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.249198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.249216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.249274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.249281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.249284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.249308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.249324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.249345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.249410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.249416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.249420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.249433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.249448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.249467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.249530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.249536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.249539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.249553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.249568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.249586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 
12:19:37.249643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.249650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.249653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.249666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.249681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.249699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.249760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.249766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.249770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.249783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.249798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.249816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.249896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.249903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.249906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.249919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.249927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.249934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.249952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.250011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.250017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 
12:19:37.250020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.250024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.250034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.250038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.250042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.250048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.250066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.250124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.250131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.250134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.250138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.250147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.250152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.250156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.582 [2024-12-05 12:19:37.250162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.582 [2024-12-05 12:19:37.250181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.582 [2024-12-05 12:19:37.250234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.582 [2024-12-05 12:19:37.250240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.582 [2024-12-05 12:19:37.250243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.250247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.582 [2024-12-05 12:19:37.250256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.582 [2024-12-05 12:19:37.250261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.250271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.250300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.250382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.250389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.250392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 
00:16:06.583 [2024-12-05 12:19:37.250406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.250421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.250440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.250512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.250523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.250526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.250541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.250556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.250575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.250640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.250646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.250650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.250663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.250678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.250696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.250759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.250767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.250771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.250784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:06.583 [2024-12-05 12:19:37.250793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.250799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.250818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.250868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.250879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.250883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.250897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.250905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.250912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.250930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.250989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.250996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.250999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.251023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.251039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.251059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.251119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.251126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.251129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.251143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.251157] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.251176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.251235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.251249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.251252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.251266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.251292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.251330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.251421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.251428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.251431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.251446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.251461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.251480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.251541] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.251548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.251551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.251565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.251580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.251598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x803a80, cid 3, qid 0 00:16:06.583 [2024-12-05 12:19:37.251685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.583 [2024-12-05 12:19:37.251691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.583 [2024-12-05 12:19:37.251695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.583 [2024-12-05 12:19:37.251708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.583 [2024-12-05 12:19:37.251716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7c2d90) 00:16:06.583 [2024-12-05 12:19:37.251722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.583 [2024-12-05 12:19:37.251741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x803a80, cid 3, qid 0
[... the identical poll cycle above (pdu type = 5 response -> nvme_tcp_capsule_resp_hdr_handle -> nvme_tcp_req_complete -> FABRIC PROPERTY GET qid:0 cid:3 resent) repeats for tcp req 0x803a80 on tqpair 0x7c2d90, timestamps 12:19:37.251792 through 12:19:37.259387, while the host polls the shutdown property ...]
00:16:06.586 [2024-12-05 12:19:37.259477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.586 [2024-12-05 12:19:37.259484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.586 [2024-12-05 12:19:37.259487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.586 [2024-12-05 12:19:37.259491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x803a80) on tqpair=0x7c2d90 00:16:06.586 [2024-12-05 12:19:37.259499] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 11 milliseconds 00:16:06.586 00:16:06.586 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:06.586 [2024-12-05 12:19:37.298023] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:16:06.586 [2024-12-05 12:19:37.298089] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86725 ] 00:16:06.850 [2024-12-05 12:19:37.443228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:16:06.850 [2024-12-05 12:19:37.447304] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:06.850 [2024-12-05 12:19:37.447322] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:06.850 [2024-12-05 12:19:37.447350] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:06.850 [2024-12-05 12:19:37.447360] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:06.850 [2024-12-05 12:19:37.447611] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:16:06.850 [2024-12-05 12:19:37.447650] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8bfd90 0 00:16:06.850 [2024-12-05 12:19:37.455318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:06.850 [2024-12-05 12:19:37.455368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:06.850 [2024-12-05 12:19:37.455388] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:06.850 [2024-12-05 12:19:37.455391] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:06.850 [2024-12-05 12:19:37.455417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.850 [2024-12-05 12:19:37.455423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.850 [2024-12-05 12:19:37.455427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.850 [2024-12-05 12:19:37.455437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:06.850 [2024-12-05 12:19:37.455465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.850 [2024-12-05 12:19:37.463310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.850 [2024-12-05 12:19:37.463330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.850 [2024-12-05 12:19:37.463350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.850 [2024-12-05 12:19:37.463354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.850 [2024-12-05 12:19:37.463382] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:06.850 [2024-12-05 12:19:37.463389] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:16:06.850 [2024-12-05 12:19:37.463395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:16:06.850 [2024-12-05 12:19:37.463409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.850 [2024-12-05 12:19:37.463415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.850 [2024-12-05 12:19:37.463418] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.850 [2024-12-05 12:19:37.463426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.850 [2024-12-05 12:19:37.463461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.850 [2024-12-05 12:19:37.463531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.850 [2024-12-05 12:19:37.463538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.850 [2024-12-05 12:19:37.463541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.850 [2024-12-05 12:19:37.463545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.851 [2024-12-05 12:19:37.463550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:16:06.851 [2024-12-05 12:19:37.463556] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:16:06.851 [2024-12-05 12:19:37.463563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.463577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.851 [2024-12-05 12:19:37.463626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.851 [2024-12-05 12:19:37.463689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.851 [2024-12-05 12:19:37.463696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.851 [2024-12-05 12:19:37.463699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.851 [2024-12-05 12:19:37.463709] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:16:06.851 [2024-12-05 12:19:37.463716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:06.851 [2024-12-05 12:19:37.463723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.463738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.851 [2024-12-05 12:19:37.463755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.851 [2024-12-05 12:19:37.463815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.851 [2024-12-05 12:19:37.463822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.851 [2024-12-05 
12:19:37.463825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.851 [2024-12-05 12:19:37.463834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:06.851 [2024-12-05 12:19:37.463844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.463859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.851 [2024-12-05 12:19:37.463875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.851 [2024-12-05 12:19:37.463940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.851 [2024-12-05 12:19:37.463946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.851 [2024-12-05 12:19:37.463949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.463953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.851 [2024-12-05 12:19:37.463958] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:06.851 [2024-12-05 12:19:37.463963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:06.851 [2024-12-05 12:19:37.463970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:06.851 [2024-12-05 12:19:37.464079] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:16:06.851 [2024-12-05 12:19:37.464085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:06.851 [2024-12-05 12:19:37.464093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.464108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.851 [2024-12-05 12:19:37.464127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.851 [2024-12-05 12:19:37.464196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.851 [2024-12-05 12:19:37.464204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.851 [2024-12-05 12:19:37.464208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.851 
[2024-12-05 12:19:37.464217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:06.851 [2024-12-05 12:19:37.464227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.464242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.851 [2024-12-05 12:19:37.464259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.851 [2024-12-05 12:19:37.464328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.851 [2024-12-05 12:19:37.464334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.851 [2024-12-05 12:19:37.464337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.851 [2024-12-05 12:19:37.464345] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:06.851 [2024-12-05 12:19:37.464350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:06.851 [2024-12-05 12:19:37.464387] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:16:06.851 [2024-12-05 12:19:37.464397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:06.851 [2024-12-05 12:19:37.464406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.464418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.851 [2024-12-05 12:19:37.464439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.851 [2024-12-05 12:19:37.464552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.851 [2024-12-05 12:19:37.464558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.851 [2024-12-05 12:19:37.464562] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464565] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bfd90): datao=0, datal=4096, cccid=0 00:16:06.851 [2024-12-05 12:19:37.464570] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x900600) on tqpair(0x8bfd90): expected_datao=0, payload_size=4096 00:16:06.851 [2024-12-05 12:19:37.464575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464582] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464586] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.851 [2024-12-05 12:19:37.464599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.851 [2024-12-05 12:19:37.464603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.851 [2024-12-05 12:19:37.464614] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:16:06.851 [2024-12-05 12:19:37.464619] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:16:06.851 [2024-12-05 12:19:37.464624] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:16:06.851 [2024-12-05 12:19:37.464631] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:16:06.851 [2024-12-05 12:19:37.464636] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:16:06.851 [2024-12-05 12:19:37.464641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:16:06.851 [2024-12-05 12:19:37.464649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:06.851 [2024-12-05 12:19:37.464657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.464672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:06.851 [2024-12-05 12:19:37.464692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.851 [2024-12-05 12:19:37.464757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.851 [2024-12-05 12:19:37.464763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.851 [2024-12-05 12:19:37.464767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90 00:16:06.851 [2024-12-05 12:19:37.464777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.464791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.851 [2024-12-05 12:19:37.464797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.851 [2024-12-05 12:19:37.464804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0x8bfd90) 00:16:06.851 [2024-12-05 12:19:37.464810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.852 [2024-12-05 12:19:37.464815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.464819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.464822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8bfd90) 00:16:06.852 [2024-12-05 12:19:37.464828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.852 [2024-12-05 12:19:37.464833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.464837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.464840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.852 [2024-12-05 12:19:37.464846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.852 [2024-12-05 12:19:37.464851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.464873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.464879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.464883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bfd90) 00:16:06.852 [2024-12-05 12:19:37.464889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.852 [2024-12-05 12:19:37.464927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900600, cid 0, qid 0 00:16:06.852 [2024-12-05 12:19:37.464934] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900780, cid 1, qid 0 00:16:06.852 [2024-12-05 12:19:37.464939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900900, cid 2, qid 0 00:16:06.852 [2024-12-05 12:19:37.464943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.852 [2024-12-05 12:19:37.464947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900c00, cid 4, qid 0 00:16:06.852 [2024-12-05 12:19:37.465044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.852 [2024-12-05 12:19:37.465050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.852 [2024-12-05 12:19:37.465054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900c00) on tqpair=0x8bfd90 00:16:06.852 [2024-12-05 12:19:37.465062] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:16:06.852 [2024-12-05 12:19:37.465068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465089] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bfd90) 00:16:06.852 [2024-12-05 12:19:37.465128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:06.852 [2024-12-05 12:19:37.465146] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900c00, cid 4, qid 0 00:16:06.852 [2024-12-05 12:19:37.465205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.852 [2024-12-05 12:19:37.465211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.852 [2024-12-05 12:19:37.465214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900c00) on tqpair=0x8bfd90 00:16:06.852 [2024-12-05 12:19:37.465292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bfd90) 00:16:06.852 [2024-12-05 12:19:37.465322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.852 [2024-12-05 12:19:37.465341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900c00, cid 4, qid 0 00:16:06.852 [2024-12-05 12:19:37.465431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.852 [2024-12-05 12:19:37.465439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.852 [2024-12-05 12:19:37.465443] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465446] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bfd90): datao=0, datal=4096, cccid=4 00:16:06.852 [2024-12-05 12:19:37.465450] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x900c00) on tqpair(0x8bfd90): expected_datao=0, payload_size=4096 00:16:06.852 [2024-12-05 12:19:37.465455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465462] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465466] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.852 [2024-12-05 
12:19:37.465479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.852 [2024-12-05 12:19:37.465482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900c00) on tqpair=0x8bfd90 00:16:06.852 [2024-12-05 12:19:37.465497] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:16:06.852 [2024-12-05 12:19:37.465508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bfd90) 00:16:06.852 [2024-12-05 12:19:37.465538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.852 [2024-12-05 12:19:37.465558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900c00, cid 4, qid 0 00:16:06.852 [2024-12-05 12:19:37.465659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.852 [2024-12-05 12:19:37.465665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.852 [2024-12-05 12:19:37.465669] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465672] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bfd90): datao=0, datal=4096, cccid=4 00:16:06.852 [2024-12-05 12:19:37.465677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x900c00) on tqpair(0x8bfd90): expected_datao=0, payload_size=4096 00:16:06.852 [2024-12-05 12:19:37.465681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465687] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465691] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.852 [2024-12-05 12:19:37.465704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.852 [2024-12-05 12:19:37.465707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900c00) on tqpair=0x8bfd90 00:16:06.852 [2024-12-05 12:19:37.465728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bfd90) 00:16:06.852 [2024-12-05 12:19:37.465758] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.852 [2024-12-05 12:19:37.465777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900c00, cid 4, qid 0 00:16:06.852 [2024-12-05 12:19:37.465852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.852 [2024-12-05 12:19:37.465858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.852 [2024-12-05 12:19:37.465861] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465865] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bfd90): datao=0, datal=4096, cccid=4 00:16:06.852 [2024-12-05 12:19:37.465869] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x900c00) on tqpair(0x8bfd90): expected_datao=0, payload_size=4096 00:16:06.852 [2024-12-05 12:19:37.465873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465880] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465883] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465891] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.852 [2024-12-05 12:19:37.465897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.852 [2024-12-05 12:19:37.465900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.852 [2024-12-05 12:19:37.465904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900c00) on tqpair=0x8bfd90 00:16:06.852 [2024-12-05 12:19:37.465912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:06.852 [2024-12-05 12:19:37.465946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:16:06.853 [2024-12-05 12:19:37.465952] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:16:06.853 [2024-12-05 12:19:37.465956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:16:06.853 [2024-12-05 12:19:37.465961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:16:06.853 [2024-12-05 12:19:37.465981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.465986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.465993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.853 [2024-12-05 12:19:37.466000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.466013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:06.853 [2024-12-05 12:19:37.466038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900c00, cid 4, qid 0 00:16:06.853 [2024-12-05 12:19:37.466045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900d80, cid 5, qid 0 00:16:06.853 [2024-12-05 12:19:37.466119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.853 [2024-12-05 12:19:37.466126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.853 [2024-12-05 12:19:37.466129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900c00) on tqpair=0x8bfd90 00:16:06.853 [2024-12-05 12:19:37.466139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.853 [2024-12-05 12:19:37.466145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.853 [2024-12-05 12:19:37.466148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900d80) on tqpair=0x8bfd90 00:16:06.853 [2024-12-05 12:19:37.466161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.466172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.853 [2024-12-05 12:19:37.466189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900d80, cid 5, qid 0 00:16:06.853 [2024-12-05 12:19:37.466249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.853 [2024-12-05 12:19:37.466255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.853 [2024-12-05 12:19:37.466258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900d80) on tqpair=0x8bfd90 00:16:06.853 [2024-12-05 12:19:37.466271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.466295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.853 [2024-12-05 12:19:37.466315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900d80, cid 5, qid 0 00:16:06.853 [2024-12-05 12:19:37.466380] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.853 [2024-12-05 12:19:37.466387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.853 [2024-12-05 12:19:37.466390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900d80) on tqpair=0x8bfd90 00:16:06.853 [2024-12-05 12:19:37.466404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.466414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.853 [2024-12-05 12:19:37.466431] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900d80, cid 5, qid 0 00:16:06.853 [2024-12-05 12:19:37.466486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.853 [2024-12-05 12:19:37.466493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.853 [2024-12-05 12:19:37.466496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900d80) on tqpair=0x8bfd90 00:16:06.853 [2024-12-05 12:19:37.466517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.466529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.853 [2024-12-05 12:19:37.466536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.466546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.853 [2024-12-05 12:19:37.466553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.466563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.853 [2024-12-05 12:19:37.466570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8bfd90) 00:16:06.853 [2024-12-05 12:19:37.466579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.853 [2024-12-05 12:19:37.466599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900d80, cid 5, qid 0 00:16:06.853 [2024-12-05 12:19:37.466605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900c00, cid 4, qid 0 00:16:06.853 
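The four GET LOG PAGE (02h) capsules above carry the log page identifier in the low byte of cdw10: 01h (Error Information), 02h (SMART / Health Information), 03h (Firmware Slot) and 05h (Commands Supported and Effects), which the host reads while populating its supported-log-pages state. A minimal sketch of one such read through SPDK's public API follows; it is illustrative only and assumes a ctrlr already attached with spdk_nvme_connect() to the target used in this run (10.0.0.3:4420, nqn.2016-06.io.spdk:cnode1).

    #include "spdk/nvme.h"
    #include "spdk/env.h"
    #include <stdbool.h>
    #include <stdio.h>

    static void
    get_log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *done = cb_arg;

        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed\n");
        }
        *done = true;
    }

    /* Sketch: read the SMART / Health log page (LID 02h), the second of the
     * four GET LOG PAGE commands in the trace above; ctrlr is assumed to be
     * an already attached controller. */
    static int
    read_health_log_page(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_health_information_page *page;
        bool done = false;
        int rc;

        /* DMA-safe buffer; plain malloc would also work for the TCP transport */
        page = spdk_dma_zmalloc(sizeof(*page), 4096, NULL);
        if (page == NULL) {
            return -1;
        }
        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                              SPDK_NVME_GLOBAL_NS_TAG, page, sizeof(*page),
                                              0, get_log_page_done, &done);
        if (rc == 0) {
            while (!done) {
                /* admin completions, including this one, are reaped here */
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
        }
        spdk_dma_free(page);
        return rc;
    }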
[2024-12-05 12:19:37.466610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900f00, cid 6, qid 0 00:16:06.853 [2024-12-05 12:19:37.466615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x901080, cid 7, qid 0 00:16:06.853 [2024-12-05 12:19:37.466770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.853 [2024-12-05 12:19:37.466777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.853 [2024-12-05 12:19:37.466780] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466784] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bfd90): datao=0, datal=8192, cccid=5 00:16:06.853 [2024-12-05 12:19:37.466788] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x900d80) on tqpair(0x8bfd90): expected_datao=0, payload_size=8192 00:16:06.853 [2024-12-05 12:19:37.466792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466807] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466812] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.853 [2024-12-05 12:19:37.466823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.853 [2024-12-05 12:19:37.466826] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466830] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bfd90): datao=0, datal=512, cccid=4 00:16:06.853 [2024-12-05 12:19:37.466834] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x900c00) on tqpair(0x8bfd90): expected_datao=0, payload_size=512 00:16:06.853 [2024-12-05 12:19:37.466838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466843] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466847] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.853 [2024-12-05 12:19:37.466857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.853 [2024-12-05 12:19:37.466861] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466864] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8bfd90): datao=0, datal=512, cccid=6 00:16:06.853 [2024-12-05 12:19:37.466868] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x900f00) on tqpair(0x8bfd90): expected_datao=0, payload_size=512 00:16:06.853 [2024-12-05 12:19:37.466872] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466877] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466881] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:06.853 [2024-12-05 12:19:37.466891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:06.853 [2024-12-05 12:19:37.466895] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:06.853 [2024-12-05 12:19:37.466898] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x8bfd90): datao=0, datal=4096, cccid=7
00:16:06.853 [2024-12-05 12:19:37.466902] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x901080) on tqpair(0x8bfd90): expected_datao=0, payload_size=4096
00:16:06.853 [2024-12-05 12:19:37.466906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:06.853 [2024-12-05 12:19:37.466912] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:16:06.853 [2024-12-05 12:19:37.466916] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:16:06.853 [2024-12-05 12:19:37.466923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:06.853 [2024-12-05 12:19:37.466929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:06.853 [2024-12-05 12:19:37.466932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:06.853 [2024-12-05 12:19:37.466936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900d80) on tqpair=0x8bfd90
00:16:06.853 [2024-12-05 12:19:37.466949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:06.853 [2024-12-05 12:19:37.466955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:06.853 [2024-12-05 12:19:37.466958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:06.853 [2024-12-05 12:19:37.466962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900c00) on tqpair=0x8bfd90
00:16:06.853 [2024-12-05 12:19:37.466972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:06.853 [2024-12-05 12:19:37.466978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:06.853 [2024-12-05 12:19:37.466981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:06.853 [2024-12-05 12:19:37.466985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900f00) on tqpair=0x8bfd90
00:16:06.854 [2024-12-05 12:19:37.466991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:06.854 [2024-12-05 12:19:37.466997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:06.854 [2024-12-05 12:19:37.467000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:06.854 [2024-12-05 12:19:37.467004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x901080) on tqpair=0x8bfd90
00:16:06.854 =====================================================
00:16:06.854 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:16:06.854 =====================================================
00:16:06.854 Controller Capabilities/Features
00:16:06.854 ================================
00:16:06.854 Vendor ID: 8086
00:16:06.854 Subsystem Vendor ID: 8086
00:16:06.854 Serial Number: SPDK00000000000001
00:16:06.854 Model Number: SPDK bdev Controller
00:16:06.854 Firmware Version: 25.01
00:16:06.854 Recommended Arb Burst: 6
00:16:06.854 IEEE OUI Identifier: e4 d2 5c
00:16:06.854 Multi-path I/O
00:16:06.854 May have multiple subsystem ports: Yes
00:16:06.854 May have multiple controllers: Yes
00:16:06.854 Associated with SR-IOV VF: No
00:16:06.854 Max Data Transfer Size: 131072
00:16:06.854 Max Number of Namespaces: 32
00:16:06.854 Max Number of I/O Queues: 127
00:16:06.854 NVMe Specification Version (VS): 1.3
00:16:06.854 NVMe Specification Version (Identify): 1.3
00:16:06.854 Maximum Queue Entries: 128
00:16:06.854 Contiguous Queues Required: Yes
00:16:06.854 Arbitration Mechanisms Supported
00:16:06.854 Weighted Round Robin: Not Supported
00:16:06.854 Vendor Specific: Not Supported
00:16:06.854 Reset Timeout: 15000 ms
00:16:06.854 Doorbell Stride: 4 bytes
00:16:06.854 NVM Subsystem Reset: Not Supported
00:16:06.854 Command Sets Supported
00:16:06.854 NVM Command Set: Supported
00:16:06.854 Boot Partition: Not Supported
00:16:06.854 Memory Page Size Minimum: 4096 bytes
00:16:06.854 Memory Page Size Maximum: 4096 bytes
00:16:06.854 Persistent Memory Region: Not Supported
00:16:06.854 Optional Asynchronous Events Supported
00:16:06.854 Namespace Attribute Notices: Supported
00:16:06.854 Firmware Activation Notices: Not Supported
00:16:06.854 ANA Change Notices: Not Supported
00:16:06.854 PLE Aggregate Log Change Notices: Not Supported
00:16:06.854 LBA Status Info Alert Notices: Not Supported
00:16:06.854 EGE Aggregate Log Change Notices: Not Supported
00:16:06.854 Normal NVM Subsystem Shutdown event: Not Supported
00:16:06.854 Zone Descriptor Change Notices: Not Supported
00:16:06.854 Discovery Log Change Notices: Not Supported
00:16:06.854 Controller Attributes
00:16:06.854 128-bit Host Identifier: Supported
00:16:06.854 Non-Operational Permissive Mode: Not Supported
00:16:06.854 NVM Sets: Not Supported
00:16:06.854 Read Recovery Levels: Not Supported
00:16:06.854 Endurance Groups: Not Supported
00:16:06.854 Predictable Latency Mode: Not Supported
00:16:06.854 Traffic Based Keep Alive: Not Supported
00:16:06.854 Namespace Granularity: Not Supported
00:16:06.854 SQ Associations: Not Supported
00:16:06.854 UUID List: Not Supported
00:16:06.854 Multi-Domain Subsystem: Not Supported
00:16:06.854 Fixed Capacity Management: Not Supported
00:16:06.854 Variable Capacity Management: Not Supported
00:16:06.854 Delete Endurance Group: Not Supported
00:16:06.854 Delete NVM Set: Not Supported
00:16:06.854 Extended LBA Formats Supported: Not Supported
00:16:06.854 Flexible Data Placement Supported: Not Supported
00:16:06.854
00:16:06.854 Controller Memory Buffer Support
00:16:06.854 ================================
00:16:06.854 Supported: No
00:16:06.854
00:16:06.854 Persistent Memory Region Support
00:16:06.854 ================================
00:16:06.854 Supported: No
00:16:06.854
00:16:06.854 Admin Command Set Attributes
00:16:06.854 ============================
00:16:06.854 Security Send/Receive: Not Supported
00:16:06.854 Format NVM: Not Supported
00:16:06.854 Firmware Activate/Download: Not Supported
00:16:06.854 Namespace Management: Not Supported
00:16:06.854 Device Self-Test: Not Supported
00:16:06.854 Directives: Not Supported
00:16:06.854 NVMe-MI: Not Supported
00:16:06.854 Virtualization Management: Not Supported
00:16:06.854 Doorbell Buffer Config: Not Supported
00:16:06.854 Get LBA Status Capability: Not Supported
00:16:06.854 Command & Feature Lockdown Capability: Not Supported
00:16:06.854 Abort Command Limit: 4
00:16:06.854 Async Event Request Limit: 4
00:16:06.854 Number of Firmware Slots: N/A
00:16:06.854 Firmware Slot 1 Read-Only: N/A
00:16:06.854 Firmware Activation Without Reset: N/A
00:16:06.854 Multiple Update Detection Support: N/A
00:16:06.854 Firmware Update Granularity: No Information Provided
00:16:06.854 Per-Namespace SMART Log: No
00:16:06.854 Asymmetric Namespace Access Log Page: Not Supported
00:16:06.854 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:16:06.854 Command Effects Log Page: Supported
00:16:06.854 Get Log Page Extended Data: Supported
00:16:06.854 Telemetry Log Pages: Not Supported
00:16:06.854 Persistent Event Log Pages: Not Supported
00:16:06.854 Supported Log Pages Log Page: May Support
00:16:06.854 Commands Supported & Effects Log Page: Not Supported
00:16:06.854 Feature Identifiers & Effects Log Page: May Support
00:16:06.854 NVMe-MI Commands & Effects Log Page: May Support
00:16:06.854 Data Area 4 for Telemetry Log: Not Supported
00:16:06.854 Error Log Page Entries Supported: 128
00:16:06.854 Keep Alive: Supported
00:16:06.854 Keep Alive Granularity: 10000 ms
00:16:06.854
00:16:06.854 NVM Command Set Attributes
00:16:06.854 ==========================
00:16:06.854 Submission Queue Entry Size
00:16:06.854 Max: 64
00:16:06.854 Min: 64
00:16:06.854 Completion Queue Entry Size
00:16:06.854 Max: 16
00:16:06.854 Min: 16
00:16:06.854 Number of Namespaces: 32
00:16:06.854 Compare Command: Supported
00:16:06.854 Write Uncorrectable Command: Not Supported
00:16:06.854 Dataset Management Command: Supported
00:16:06.854 Write Zeroes Command: Supported
00:16:06.854 Set Features Save Field: Not Supported
00:16:06.854 Reservations: Supported
00:16:06.854 Timestamp: Not Supported
00:16:06.854 Copy: Supported
00:16:06.854 Volatile Write Cache: Present
00:16:06.854 Atomic Write Unit (Normal): 1
00:16:06.854 Atomic Write Unit (PFail): 1
00:16:06.854 Atomic Compare & Write Unit: 1
00:16:06.854 Fused Compare & Write: Supported
00:16:06.854 Scatter-Gather List
00:16:06.854 SGL Command Set: Supported
00:16:06.854 SGL Keyed: Supported
00:16:06.854 SGL Bit Bucket Descriptor: Not Supported
00:16:06.854 SGL Metadata Pointer: Not Supported
00:16:06.854 Oversized SGL: Not Supported
00:16:06.854 SGL Metadata Address: Not Supported
00:16:06.854 SGL Offset: Supported
00:16:06.854 Transport SGL Data Block: Not Supported
00:16:06.854 Replay Protected Memory Block: Not Supported
00:16:06.854
00:16:06.854 Firmware Slot Information
00:16:06.854 =========================
00:16:06.854 Active slot: 1
00:16:06.854 Slot 1 Firmware Revision: 25.01
00:16:06.854
00:16:06.854
00:16:06.854 Commands Supported and Effects
00:16:06.854 ==============================
00:16:06.854 Admin Commands
00:16:06.854 --------------
00:16:06.854 Get Log Page (02h): Supported
00:16:06.854 Identify (06h): Supported
00:16:06.854 Abort (08h): Supported
00:16:06.854 Set Features (09h): Supported
00:16:06.854 Get Features (0Ah): Supported
00:16:06.854 Asynchronous Event Request (0Ch): Supported
00:16:06.854 Keep Alive (18h): Supported
00:16:06.854 I/O Commands
00:16:06.854 ------------
00:16:06.854 Flush (00h): Supported LBA-Change
00:16:06.854 Write (01h): Supported LBA-Change
00:16:06.854 Read (02h): Supported
00:16:06.854 Compare (05h): Supported
00:16:06.854 Write Zeroes (08h): Supported LBA-Change
00:16:06.854 Dataset Management (09h): Supported LBA-Change
00:16:06.854 Copy (19h): Supported LBA-Change
00:16:06.854
00:16:06.854 Error Log
00:16:06.854 =========
00:16:06.854
00:16:06.854 Arbitration
00:16:06.854 ===========
00:16:06.854 Arbitration Burst: 1
00:16:06.854
00:16:06.854 Power Management
00:16:06.854 ================
00:16:06.854 Number of Power States: 1
00:16:06.854 Current Power State: Power State #0
00:16:06.854 Power State #0:
00:16:06.854 Max Power: 0.00 W
00:16:06.854 Non-Operational State: Operational
00:16:06.854 Entry Latency: Not Reported
00:16:06.854 Exit Latency: Not Reported
00:16:06.854 Relative Read Throughput: 0
00:16:06.854 Relative Read Latency: 0
00:16:06.854 Relative Write Throughput: 0
00:16:06.854 Relative Write Latency: 0
00:16:06.854 Idle Power: Not Reported
00:16:06.854 Active Power: Not Reported
00:16:06.854 Non-Operational Permissive Mode: Not Supported
00:16:06.854
00:16:06.854 Health Information
00:16:06.854 ==================
00:16:06.854 Critical Warnings:
00:16:06.854 Available Spare Space: OK
00:16:06.854 Temperature: OK
00:16:06.854 Device Reliability: OK
00:16:06.854 Read Only: No
00:16:06.854 Volatile Memory Backup: OK
00:16:06.855 Current Temperature: 0 Kelvin (-273 Celsius)
00:16:06.855 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:16:06.855 Available Spare: 0%
00:16:06.855 Available Spare Threshold: 0%
00:16:06.855 Life Percentage Used: 0%
00:16:06.855 [2024-12-05 12:19:37.467118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:06.855 [2024-12-05 12:19:37.467126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8bfd90)
00:16:06.855 [2024-12-05 12:19:37.467133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:06.855 [2024-12-05 12:19:37.467156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x901080, cid 7, qid 0
00:16:06.855 [2024-12-05 12:19:37.467230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:06.855 [2024-12-05 12:19:37.467237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:06.855 [2024-12-05 12:19:37.467240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:06.855 [2024-12-05 12:19:37.467244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x901080) on tqpair=0x8bfd90
00:16:06.855 [2024-12-05 12:19:37.474331] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:16:06.855 [2024-12-05 12:19:37.474378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900600) on tqpair=0x8bfd90
00:16:06.855 [2024-12-05 12:19:37.474386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:06.855 [2024-12-05 12:19:37.474393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900780) on tqpair=0x8bfd90
00:16:06.855 [2024-12-05 12:19:37.474397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:06.855 [2024-12-05 12:19:37.474402] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900900) on tqpair=0x8bfd90
00:16:06.855 [2024-12-05 12:19:37.474406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:06.855 [2024-12-05 12:19:37.474411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90
00:16:06.855 [2024-12-05 12:19:37.474415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:06.855 [2024-12-05 12:19:37.474425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:06.855 [2024-12-05 12:19:37.474430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:06.855 [2024-12-05 12:19:37.474434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90)
00:16:06.855 [2024-12-05 12:19:37.474442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:06.855 [2024-12-05 12:19:37.474473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0
00:16:06.855 [2024-12-05
12:19:37.474561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.855 [2024-12-05 12:19:37.474568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.855 [2024-12-05 12:19:37.474572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.855 [2024-12-05 12:19:37.474583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.855 [2024-12-05 12:19:37.474598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.855 [2024-12-05 12:19:37.474637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.855 [2024-12-05 12:19:37.474726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.855 [2024-12-05 12:19:37.474733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.855 [2024-12-05 12:19:37.474736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474740] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.855 [2024-12-05 12:19:37.474744] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:16:06.855 [2024-12-05 12:19:37.474749] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:16:06.855 [2024-12-05 12:19:37.474758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.855 [2024-12-05 12:19:37.474773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.855 [2024-12-05 12:19:37.474790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.855 [2024-12-05 12:19:37.474849] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.855 [2024-12-05 12:19:37.474856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.855 [2024-12-05 12:19:37.474859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.855 [2024-12-05 12:19:37.474873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.855 [2024-12-05 12:19:37.474887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.855 [2024-12-05 12:19:37.474904] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.855 [2024-12-05 12:19:37.474965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.855 [2024-12-05 12:19:37.474971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.855 [2024-12-05 12:19:37.474974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.855 [2024-12-05 12:19:37.474987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.474995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.855 [2024-12-05 12:19:37.475002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.855 [2024-12-05 12:19:37.475045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.855 [2024-12-05 12:19:37.475114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.855 [2024-12-05 12:19:37.475121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.855 [2024-12-05 12:19:37.475124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.855 [2024-12-05 12:19:37.475139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.855 [2024-12-05 12:19:37.475154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.855 [2024-12-05 12:19:37.475171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.855 [2024-12-05 12:19:37.475235] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.855 [2024-12-05 12:19:37.475241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.855 [2024-12-05 12:19:37.475245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.855 [2024-12-05 12:19:37.475259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.855 [2024-12-05 12:19:37.475274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.855 [2024-12-05 12:19:37.475321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.855 [2024-12-05 12:19:37.475375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.855 [2024-12-05 
12:19:37.475381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.855 [2024-12-05 12:19:37.475385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.855 [2024-12-05 12:19:37.475400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.855 [2024-12-05 12:19:37.475408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.475416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.475435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.475517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.475528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.475532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.475547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.475563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.475582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.475649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.475655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.475659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.475687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.475701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.475718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.475775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.475785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.475789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 
12:19:37.475793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.475803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.475818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.475835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.475892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.475906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.475910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.475924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.475932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.475939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.475958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.476030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.476045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.476049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.476063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.476078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.476096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.476172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.476178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.476181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.476195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
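The FABRIC PROPERTY GET capsules repeating on cid:3 through this stretch are the shutdown path re-reading the Controller Status (CSTS) register: a fabrics controller has no memory-mapped BAR, so every register poll travels as a Property Get command on the admin queue. Read through the public API, the same check looks roughly like the sketch below; ctrlr is assumed to be an attached controller, and the helper name is ours, not SPDK's.

    #include "spdk/nvme.h"
    #include <stdbool.h>

    /* Sketch: the CSTS.SHST poll that nvme_ctrlr_shutdown_poll_async() performs
     * internally; on NVMe/TCP this register read is carried by the Property Get
     * capsules logged above. */
    static bool
    shutdown_finished(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_csts_register csts;

        csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
        return csts.bits.shst == SPDK_NVME_SHST_COMPLETE;
    }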
00:16:06.856 [2024-12-05 12:19:37.476199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.476209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.476226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.476279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.476285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.476304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.476330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.476345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.476364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.476445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.476451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.476455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.476468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.476483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.476501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.476558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.476564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.476568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.476582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.476597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.476614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.476688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.476694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.476698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.476711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.476726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.476742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.476813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.476819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.476823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.476836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.856 [2024-12-05 12:19:37.476851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.856 [2024-12-05 12:19:37.476867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.856 [2024-12-05 12:19:37.476930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.856 [2024-12-05 12:19:37.476936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.856 [2024-12-05 12:19:37.476939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.856 [2024-12-05 12:19:37.476952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.856 [2024-12-05 12:19:37.476960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.476967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 
[2024-12-05 12:19:37.476983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.477040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.477047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.477050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.477063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.477077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.477094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.477149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.477155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.477158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.477171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.477186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.477202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.477270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.477276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.477279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.477318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.477335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.477354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.477421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:16:06.857 [2024-12-05 12:19:37.477436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.477440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.477455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.477470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.477489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.477547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.477557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.477561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.477576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.477591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.477609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.477687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.477693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.477697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.477710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.477724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.477741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.477797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.477803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.477807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
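The poll loop ends once CSTS.SHST flips to shutdown complete, which the log reports a little further down as "shutdown complete in 7 milliseconds". From an application's side, one way to drive this whole handshake is SPDK's async detach API; a minimal sketch, assuming ctrlr came from spdk_nvme_connect() and with error handling trimmed:

    #include "spdk/nvme.h"
    #include <errno.h>

    /* Hypothetical teardown: spdk_nvme_detach_async() initiates the CC.SHN
     * shutdown handshake whose Property Get traffic fills this part of the
     * log; polling the context completes it. */
    static void
    detach_controller(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_detach_ctx *ctx = NULL;

        if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
            return;
        }
        while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
            /* a real application would interleave other work here */
        }
    }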
00:16:06.857 [2024-12-05 12:19:37.477810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.477820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.477834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.477851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.477906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.477912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.477916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.477929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477933] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.477937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.477944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.477960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.478021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.478027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.478031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.478034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.478044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.478048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.478052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90) 00:16:06.857 [2024-12-05 12:19:37.478059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:06.857 [2024-12-05 12:19:37.478076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0 00:16:06.857 [2024-12-05 12:19:37.478140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:06.857 [2024-12-05 12:19:37.478147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:06.857 [2024-12-05 12:19:37.478150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:06.857 [2024-12-05 12:19:37.478154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90 00:16:06.857 [2024-12-05 12:19:37.478163] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:06.857 [2024-12-05 12:19:37.478167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:06.857 [2024-12-05 12:19:37.478171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90)
00:16:06.857 [2024-12-05 12:19:37.478178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:06.857 [2024-12-05 12:19:37.478194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0
00:16:06.857 [2024-12-05 12:19:37.478259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:06.857 [2024-12-05 12:19:37.478265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:06.857 [2024-12-05 12:19:37.478269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:06.857 [2024-12-05 12:19:37.478272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90
00:16:06.857 [2024-12-05 12:19:37.478282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:16:06.857 [2024-12-05 12:19:37.478286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:16:06.857 [2024-12-05 12:19:37.478305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8bfd90)
00:16:06.857 [2024-12-05 12:19:37.482342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:06.857 [2024-12-05 12:19:37.482371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x900a80, cid 3, qid 0
00:16:06.857 [2024-12-05 12:19:37.482431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:16:06.857 [2024-12-05 12:19:37.482438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:16:06.857 [2024-12-05 12:19:37.482442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:06.857 [2024-12-05 12:19:37.482446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x900a80) on tqpair=0x8bfd90
00:16:06.857 [2024-12-05 12:19:37.482454] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:16:06.857 Data Units Read: 0
00:16:06.857 Data Units Written: 0
00:16:06.857 Host Read Commands: 0
00:16:06.857 Host Write Commands: 0
00:16:06.857 Controller Busy Time: 0 minutes
00:16:06.858 Power Cycles: 0
00:16:06.858 Power On Hours: 0 hours
00:16:06.858 Unsafe Shutdowns: 0
00:16:06.858 Unrecoverable Media Errors: 0
00:16:06.858 Lifetime Error Log Entries: 0
00:16:06.858 Warning Temperature Time: 0 minutes
00:16:06.858 Critical Temperature Time: 0 minutes
00:16:06.858
00:16:06.858 Number of Queues
00:16:06.858 ================
00:16:06.858 Number of I/O Submission Queues: 127
00:16:06.858 Number of I/O Completion Queues: 127
00:16:06.858
00:16:06.858 Active Namespaces
00:16:06.858 =================
00:16:06.858 Namespace ID:1
00:16:06.858 Error Recovery Timeout: Unlimited
00:16:06.858 Command Set Identifier: NVM (00h)
00:16:06.858 Deallocate: Supported
00:16:06.858 Deallocated/Unwritten Error: Not Supported
00:16:06.858 Deallocated Read Value: Unknown
00:16:06.858 Deallocate in Write Zeroes: Not Supported
00:16:06.858 Deallocated Guard Field: 0xFFFF
00:16:06.858 Flush: Supported
00:16:06.858 Reservation: Supported
00:16:06.858 Namespace Sharing Capabilities: Multiple Controllers
00:16:06.858 Size (in LBAs): 131072 (0GiB)
00:16:06.858 Capacity (in LBAs): 131072 (0GiB)
00:16:06.858 Utilization (in LBAs): 131072 (0GiB)
00:16:06.858 NGUID: ABCDEF0123456789ABCDEF0123456789
00:16:06.858 EUI64: ABCDEF0123456789
00:16:06.858 UUID: 89b40487-51b1-4ccb-b759-787e19c41715
00:16:06.858 Thin Provisioning: Not Supported
00:16:06.858 Per-NS Atomic Units: Yes
00:16:06.858 Atomic Boundary Size (Normal): 0
00:16:06.858 Atomic Boundary Size (PFail): 0
00:16:06.858 Atomic Boundary Offset: 0
00:16:06.858 Maximum Single Source Range Length: 65535
00:16:06.858 Maximum Copy Length: 65535
00:16:06.858 Maximum Source Range Count: 1
00:16:06.858 NGUID/EUI64 Never Reused: No
00:16:06.858 Namespace Write Protected: No
00:16:06.858 Number of LBA Formats: 1
00:16:06.858 Current LBA Format: LBA Format #00
00:16:06.858 LBA Format #00: Data Size: 512 Metadata Size: 0
00:16:06.858
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:06.858 rmmod nvme_tcp
00:16:06.858 rmmod nvme_fabrics
00:16:06.858 rmmod nvme_keyring
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 86678 ']'
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 86678
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 86678 ']'
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 86678
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86678
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:06.858 killing process with pid 86678
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86678'
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 86678
00:16:06.858 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 86678
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:16:07.117 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:16:07.376 12:19:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns
00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0
00:16:07.376
00:16:07.376 real 0m2.377s
00:16:07.376 user 0m4.993s
00:16:07.376 sys 0m0.798s
12:19:38
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:07.376 ************************************ 00:16:07.376 END TEST nvmf_identify 00:16:07.376 ************************************ 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.376 ************************************ 00:16:07.376 START TEST nvmf_perf 00:16:07.376 ************************************ 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:07.376 * Looking for test storage... 00:16:07.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:07.376 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:07.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.637 --rc genhtml_branch_coverage=1 00:16:07.637 --rc genhtml_function_coverage=1 00:16:07.637 --rc genhtml_legend=1 00:16:07.637 --rc geninfo_all_blocks=1 00:16:07.637 --rc geninfo_unexecuted_blocks=1 00:16:07.637 00:16:07.637 ' 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:07.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.637 --rc genhtml_branch_coverage=1 00:16:07.637 --rc genhtml_function_coverage=1 00:16:07.637 --rc genhtml_legend=1 00:16:07.637 --rc geninfo_all_blocks=1 00:16:07.637 --rc geninfo_unexecuted_blocks=1 00:16:07.637 00:16:07.637 ' 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:07.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.637 --rc genhtml_branch_coverage=1 00:16:07.637 --rc genhtml_function_coverage=1 00:16:07.637 --rc genhtml_legend=1 00:16:07.637 --rc geninfo_all_blocks=1 00:16:07.637 --rc geninfo_unexecuted_blocks=1 00:16:07.637 00:16:07.637 ' 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:07.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.637 --rc genhtml_branch_coverage=1 00:16:07.637 --rc genhtml_function_coverage=1 00:16:07.637 --rc genhtml_legend=1 00:16:07.637 --rc geninfo_all_blocks=1 00:16:07.637 --rc geninfo_unexecuted_blocks=1 00:16:07.637 00:16:07.637 ' 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.637 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.638 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:07.638 Cannot find device "nvmf_init_br" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:07.638 Cannot find device "nvmf_init_br2" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:07.638 Cannot find device "nvmf_tgt_br" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.638 Cannot find device "nvmf_tgt_br2" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:07.638 Cannot find device "nvmf_init_br" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:07.638 Cannot find device "nvmf_init_br2" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:07.638 Cannot find device "nvmf_tgt_br" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:07.638 Cannot find device "nvmf_tgt_br2" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:07.638 Cannot find device "nvmf_br" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:07.638 Cannot find device "nvmf_init_if" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:07.638 Cannot find device "nvmf_init_if2" 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:07.638 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:07.898 12:19:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:07.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:07.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:07.898 00:16:07.898 --- 10.0.0.3 ping statistics --- 00:16:07.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.898 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:07.898 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:07.898 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:16:07.898 00:16:07.898 --- 10.0.0.4 ping statistics --- 00:16:07.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.898 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:07.898 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:08.157 00:16:08.157 --- 10.0.0.1 ping statistics --- 00:16:08.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.157 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:08.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:16:08.157 00:16:08.157 --- 10.0.0.2 ping statistics --- 00:16:08.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.157 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=86940 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 86940 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 86940 ']' 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
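For anyone reproducing this environment by hand, the nvmf_veth_init sequence traced above reduces to roughly the following sketch (assumes root privileges and the same 10.0.0.0/24 addressing; the second initiator/target veth pair is elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge                              # bridge joining both pairs
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator-to-target sanity check, as in the pings above

With the topology up, the target is started inside the namespace, which is why the nvmfappstart trace that follows prefixes nvmf_tgt with "ip netns exec nvmf_tgt_ns_spdk".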
00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.157 12:19:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:08.157 [2024-12-05 12:19:38.870739] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:16:08.157 [2024-12-05 12:19:38.870821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.157 [2024-12-05 12:19:39.018642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.416 [2024-12-05 12:19:39.064897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.416 [2024-12-05 12:19:39.065395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.416 [2024-12-05 12:19:39.065661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.416 [2024-12-05 12:19:39.065930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.416 [2024-12-05 12:19:39.066164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.416 [2024-12-05 12:19:39.067478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.416 [2024-12-05 12:19:39.067592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.416 [2024-12-05 12:19:39.068322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.416 [2024-12-05 12:19:39.068357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.416 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.416 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:16:08.416 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:08.416 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:08.416 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:08.416 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.416 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:08.416 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:08.983 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:08.983 12:19:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:09.242 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:09.242 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:09.500 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:09.500 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:09.759 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
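Condensed, the target bring-up that the following trace walks through is a handful of RPCs against the running nvmf_tgt (a sketch; assumes the default /var/tmp/spdk.sock RPC socket, that Malloc0 comes from the bdev_malloc_create 64 512 call above, and that Nvme0n1 was attached via gen_nvme.sh | load_subsystem_config):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o                  # transport options exactly as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Each measurement after that is a plain spdk_nvme_perf invocation against the listener, e.g. the first fabrics pass below: /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'.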
00:16:09.760 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:09.760 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:09.760 [2024-12-05 12:19:40.588813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.760 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:10.018 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:10.018 12:19:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:10.277 12:19:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:10.277 12:19:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:10.845 12:19:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:10.845 [2024-12-05 12:19:41.621519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:10.845 12:19:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:11.104 12:19:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:11.104 12:19:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:11.104 12:19:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:11.104 12:19:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:12.481 Initializing NVMe Controllers 00:16:12.481 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:12.481 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:12.481 Initialization complete. Launching workers. 00:16:12.481 ======================================================== 00:16:12.481 Latency(us) 00:16:12.481 Device Information : IOPS MiB/s Average min max 00:16:12.481 PCIE (0000:00:10.0) NSID 1 from core 0: 20416.00 79.75 1567.03 392.57 9062.72 00:16:12.481 ======================================================== 00:16:12.481 Total : 20416.00 79.75 1567.03 392.57 9062.72 00:16:12.481 00:16:12.481 12:19:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:13.417 Initializing NVMe Controllers 00:16:13.417 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:13.417 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:13.417 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:13.417 Initialization complete. Launching workers. 
00:16:13.417 ======================================================== 00:16:13.417 Latency(us) 00:16:13.417 Device Information : IOPS MiB/s Average min max 00:16:13.417 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3255.86 12.72 306.90 109.67 5193.62 00:16:13.417 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.46 4002.26 12040.38 00:16:13.417 ======================================================== 00:16:13.417 Total : 3380.35 13.20 593.75 109.67 12040.38 00:16:13.417 00:16:13.676 12:19:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:15.052 Initializing NVMe Controllers 00:16:15.052 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:15.052 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:15.052 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:15.052 Initialization complete. Launching workers. 00:16:15.052 ======================================================== 00:16:15.052 Latency(us) 00:16:15.052 Device Information : IOPS MiB/s Average min max 00:16:15.052 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9773.02 38.18 3274.26 323.14 7459.28 00:16:15.052 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2719.17 10.62 11890.19 6251.17 20192.74 00:16:15.052 ======================================================== 00:16:15.052 Total : 12492.19 48.80 5149.68 323.14 20192.74 00:16:15.052 00:16:15.052 12:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:15.052 12:19:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:17.581 Initializing NVMe Controllers 00:16:17.581 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:17.581 Controller IO queue size 128, less than required. 00:16:17.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:17.581 Controller IO queue size 128, less than required. 00:16:17.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:17.581 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:17.581 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:17.581 Initialization complete. Launching workers. 
00:16:17.581 ======================================================== 00:16:17.581 Latency(us) 00:16:17.581 Device Information : IOPS MiB/s Average min max 00:16:17.581 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1937.91 484.48 66754.19 46001.80 114313.86 00:16:17.581 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.97 149.99 220170.41 85811.17 326661.20 00:16:17.581 ======================================================== 00:16:17.582 Total : 2537.88 634.47 103022.80 46001.80 326661.20 00:16:17.582 00:16:17.582 12:19:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:17.582 Initializing NVMe Controllers 00:16:17.582 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:17.582 Controller IO queue size 128, less than required. 00:16:17.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:17.582 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:17.582 Controller IO queue size 128, less than required. 00:16:17.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:17.582 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:17.582 WARNING: Some requested NVMe devices were skipped 00:16:17.582 No valid NVMe controllers or AIO or URING devices found 00:16:17.840 12:19:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:16:20.380 Initializing NVMe Controllers 00:16:20.380 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:20.380 Controller IO queue size 128, less than required. 00:16:20.380 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:20.380 Controller IO queue size 128, less than required. 00:16:20.380 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:20.380 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:20.380 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:20.380 Initialization complete. Launching workers. 
00:16:20.380 00:16:20.380 ==================== 00:16:20.380 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:20.380 TCP transport: 00:16:20.380 polls: 11535 00:16:20.380 idle_polls: 8935 00:16:20.380 sock_completions: 2600 00:16:20.380 nvme_completions: 5099 00:16:20.380 submitted_requests: 7646 00:16:20.380 queued_requests: 1 00:16:20.380 00:16:20.380 ==================== 00:16:20.380 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:20.380 TCP transport: 00:16:20.380 polls: 11364 00:16:20.380 idle_polls: 8605 00:16:20.380 sock_completions: 2759 00:16:20.380 nvme_completions: 5413 00:16:20.380 submitted_requests: 8070 00:16:20.380 queued_requests: 1 00:16:20.380 ======================================================== 00:16:20.380 Latency(us) 00:16:20.380 Device Information : IOPS MiB/s Average min max 00:16:20.380 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1272.63 318.16 103006.43 55106.38 161014.77 00:16:20.380 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1351.02 337.75 96143.58 42072.24 142559.00 00:16:20.380 ======================================================== 00:16:20.380 Total : 2623.65 655.91 99472.49 42072.24 161014.77 00:16:20.380 00:16:20.380 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:20.380 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.642 rmmod nvme_tcp 00:16:20.642 rmmod nvme_fabrics 00:16:20.642 rmmod nvme_keyring 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 86940 ']' 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 86940 00:16:20.642 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 86940 ']' 00:16:20.643 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 86940 00:16:20.643 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:16:20.643 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.643 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86940 00:16:20.643 killing process with pid 86940 00:16:20.643 12:19:51 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.643 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.643 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86940' 00:16:20.643 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 86940 00:16:20.643 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 86940 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:21.209 12:19:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:21.209 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:21.209 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:21.209 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:21.209 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:21.209 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:16:21.467 ************************************ 00:16:21.467 END TEST nvmf_perf 00:16:21.467 ************************************ 
00:16:21.467 00:16:21.467 real 0m14.029s 00:16:21.467 user 0m50.410s 00:16:21.467 sys 0m3.723s 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.467 ************************************ 00:16:21.467 START TEST nvmf_fio_host 00:16:21.467 ************************************ 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:21.467 * Looking for test storage... 00:16:21.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:21.467 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:21.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.727 --rc genhtml_branch_coverage=1 00:16:21.727 --rc genhtml_function_coverage=1 00:16:21.727 --rc genhtml_legend=1 00:16:21.727 --rc geninfo_all_blocks=1 00:16:21.727 --rc geninfo_unexecuted_blocks=1 00:16:21.727 00:16:21.727 ' 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:21.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.727 --rc genhtml_branch_coverage=1 00:16:21.727 --rc genhtml_function_coverage=1 00:16:21.727 --rc genhtml_legend=1 00:16:21.727 --rc geninfo_all_blocks=1 00:16:21.727 --rc geninfo_unexecuted_blocks=1 00:16:21.727 00:16:21.727 ' 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:21.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.727 --rc genhtml_branch_coverage=1 00:16:21.727 --rc genhtml_function_coverage=1 00:16:21.727 --rc genhtml_legend=1 00:16:21.727 --rc geninfo_all_blocks=1 00:16:21.727 --rc geninfo_unexecuted_blocks=1 00:16:21.727 00:16:21.727 ' 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:21.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.727 --rc genhtml_branch_coverage=1 00:16:21.727 --rc genhtml_function_coverage=1 00:16:21.727 --rc genhtml_legend=1 00:16:21.727 --rc geninfo_all_blocks=1 00:16:21.727 --rc geninfo_unexecuted_blocks=1 00:16:21.727 00:16:21.727 ' 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.727 12:19:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.727 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.728 12:19:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:21.728 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:21.728 Cannot find device "nvmf_init_br" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:21.728 Cannot find device "nvmf_init_br2" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:21.728 Cannot find device "nvmf_tgt_br" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:21.728 Cannot find device "nvmf_tgt_br2" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:21.728 Cannot find device "nvmf_init_br" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:21.728 Cannot find device "nvmf_init_br2" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:21.728 Cannot find device "nvmf_tgt_br" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:21.728 Cannot find device "nvmf_tgt_br2" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:21.728 Cannot find device "nvmf_br" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:21.728 Cannot find device "nvmf_init_if" 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:16:21.728 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:21.987 Cannot find device "nvmf_init_if2" 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:21.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:21.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:21.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:21.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:21.988 00:16:21.988 --- 10.0.0.3 ping statistics --- 00:16:21.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.988 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:21.988 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:21.988 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:21.988 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:16:22.249 00:16:22.249 --- 10.0.0.4 ping statistics --- 00:16:22.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.249 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:22.249 00:16:22.249 --- 10.0.0.1 ping statistics --- 00:16:22.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.249 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:22.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:16:22.249 00:16:22.249 --- 10.0.0.2 ping statistics --- 00:16:22.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.249 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87458 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87458 00:16:22.249 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 87458 ']' 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.249 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.250 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.250 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.250 12:19:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.250 [2024-12-05 12:19:52.966188] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:16:22.250 [2024-12-05 12:19:52.966475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.250 [2024-12-05 12:19:53.114859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.661 [2024-12-05 12:19:53.167740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.661 [2024-12-05 12:19:53.167984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.661 [2024-12-05 12:19:53.168139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.661 [2024-12-05 12:19:53.168190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.661 [2024-12-05 12:19:53.168325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:22.661 [2024-12-05 12:19:53.169538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.661 [2024-12-05 12:19:53.169606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.661 [2024-12-05 12:19:53.169722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.661 [2024-12-05 12:19:53.169726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.661 12:19:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.661 12:19:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:16:22.661 12:19:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:22.926 [2024-12-05 12:19:53.575832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.926 12:19:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:22.926 12:19:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:22.926 12:19:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.926 12:19:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:23.185 Malloc1 00:16:23.185 12:19:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:23.444 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:23.703 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:23.962 [2024-12-05 12:19:54.734893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:23.962 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:24.220 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:24.221 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:24.221 12:19:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:24.221 12:19:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:24.221 12:19:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:24.221 12:19:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:24.221 12:19:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:16:24.478 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:24.478 fio-3.35 00:16:24.478 Starting 1 thread 00:16:27.008 00:16:27.008 test: (groupid=0, jobs=1): err= 0: pid=87576: Thu Dec 5 12:19:57 2024 00:16:27.008 read: IOPS=10.2k, BW=39.7MiB/s (41.6MB/s)(79.6MiB/2006msec) 00:16:27.008 slat (nsec): min=1754, max=362978, avg=2440.17, stdev=3302.97 00:16:27.008 clat (usec): min=3237, max=12311, avg=6600.13, stdev=608.39 00:16:27.008 lat (usec): min=3257, max=12319, avg=6602.57, stdev=608.44 00:16:27.008 clat percentiles (usec): 00:16:27.008 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:16:27.008 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:16:27.009 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7635], 00:16:27.009 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11207], 99.95th=[11731], 00:16:27.009 | 99.99th=[12256] 00:16:27.009 bw ( KiB/s): min=39688, max=41344, per=99.96%, avg=40624.00, stdev=710.55, samples=4 00:16:27.009 iops : min= 9922, max=10336, avg=10156.00, stdev=177.64, samples=4 00:16:27.009 write: IOPS=10.2k, BW=39.7MiB/s (41.7MB/s)(79.7MiB/2006msec); 0 zone resets 00:16:27.009 slat (nsec): min=1854, max=232282, avg=2539.14, stdev=2713.03 00:16:27.009 clat (usec): min=2312, max=11557, avg=5936.16, stdev=513.08 00:16:27.009 lat (usec): min=2326, max=11559, avg=5938.70, stdev=513.09 00:16:27.009 clat percentiles (usec): 00:16:27.009 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5538], 
00:16:27.009 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:16:27.009 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6521], 95.00th=[ 6783], 00:16:27.009 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[ 9372], 99.95th=[11076], 00:16:27.009 | 99.99th=[11338] 00:16:27.009 bw ( KiB/s): min=39744, max=41488, per=100.00%, avg=40692.00, stdev=724.95, samples=4 00:16:27.009 iops : min= 9936, max=10372, avg=10173.00, stdev=181.24, samples=4 00:16:27.009 lat (msec) : 4=0.06%, 10=99.77%, 20=0.17% 00:16:27.009 cpu : usr=64.44%, sys=25.14%, ctx=634, majf=0, minf=7 00:16:27.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:27.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:27.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:27.009 issued rwts: total=20381,20404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:27.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:27.009 00:16:27.009 Run status group 0 (all jobs): 00:16:27.009 READ: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=79.6MiB (83.5MB), run=2006-2006msec 00:16:27.009 WRITE: bw=39.7MiB/s (41.7MB/s), 39.7MiB/s-39.7MiB/s (41.7MB/s-41.7MB/s), io=79.7MiB (83.6MB), run=2006-2006msec 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:27.009 12:19:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:27.009 12:19:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:27.009 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:27.009 fio-3.35 00:16:27.009 Starting 1 thread 00:16:29.544 00:16:29.544 test: (groupid=0, jobs=1): err= 0: pid=87623: Thu Dec 5 12:19:59 2024 00:16:29.544 read: IOPS=8718, BW=136MiB/s (143MB/s)(273MiB/2006msec) 00:16:29.544 slat (usec): min=2, max=133, avg= 3.37, stdev= 2.33 00:16:29.544 clat (usec): min=1668, max=17717, avg=8561.43, stdev=2152.02 00:16:29.544 lat (usec): min=1671, max=17722, avg=8564.80, stdev=2152.25 00:16:29.544 clat percentiles (usec): 00:16:29.544 | 1.00th=[ 4424], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6652], 00:16:29.544 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 8979], 00:16:29.544 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[12387], 00:16:29.544 | 99.00th=[14222], 99.50th=[14877], 99.90th=[16057], 99.95th=[16450], 00:16:29.544 | 99.99th=[17433] 00:16:29.544 bw ( KiB/s): min=65632, max=81888, per=51.73%, avg=72160.00, stdev=7002.08, samples=4 00:16:29.544 iops : min= 4102, max= 5118, avg=4510.00, stdev=437.63, samples=4 00:16:29.544 write: IOPS=5211, BW=81.4MiB/s (85.4MB/s)(147MiB/1800msec); 0 zone resets 00:16:29.544 slat (usec): min=29, max=369, avg=34.41, stdev= 9.72 00:16:29.544 clat (usec): min=3200, max=18565, avg=10529.47, stdev=1956.02 00:16:29.544 lat (usec): min=3231, max=18611, avg=10563.89, stdev=1958.61 00:16:29.544 clat percentiles (usec): 00:16:29.544 | 1.00th=[ 7111], 5.00th=[ 7832], 10.00th=[ 8356], 20.00th=[ 8979], 00:16:29.544 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10683], 00:16:29.544 | 70.00th=[11207], 80.00th=[12125], 90.00th=[13173], 95.00th=[14222], 00:16:29.544 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17433], 99.95th=[17695], 00:16:29.544 | 99.99th=[18482] 00:16:29.544 bw ( KiB/s): min=68736, max=84992, per=90.00%, avg=75040.00, stdev=7205.57, samples=4 00:16:29.544 iops : min= 4296, max= 5312, avg=4690.00, stdev=450.35, samples=4 00:16:29.544 lat (msec) : 2=0.02%, 4=0.39%, 10=64.67%, 20=34.92% 00:16:29.544 cpu : usr=70.37%, sys=19.35%, ctx=17, majf=0, minf=18 00:16:29.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:29.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:29.544 issued rwts: total=17489,9380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:29.544 00:16:29.544 Run status group 0 (all jobs): 00:16:29.544 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=273MiB (287MB), run=2006-2006msec 
00:16:29.544 WRITE: bw=81.4MiB/s (85.4MB/s), 81.4MiB/s-81.4MiB/s (85.4MB/s-85.4MB/s), io=147MiB (154MB), run=1800-1800msec 00:16:29.544 12:19:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.544 rmmod nvme_tcp 00:16:29.544 rmmod nvme_fabrics 00:16:29.544 rmmod nvme_keyring 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 87458 ']' 00:16:29.544 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 87458 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 87458 ']' 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 87458 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87458 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87458' 00:16:29.545 killing process with pid 87458 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 87458 00:16:29.545 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 87458 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:16:29.804 12:20:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:29.804 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:30.063 00:16:30.063 real 0m8.652s 00:16:30.063 user 0m33.995s 00:16:30.063 sys 0m2.431s 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.063 ************************************ 00:16:30.063 END TEST nvmf_fio_host 00:16:30.063 ************************************ 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:30.063 12:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:30.322 12:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.323 12:20:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.323 ************************************ 00:16:30.323 START TEST nvmf_failover 00:16:30.323 ************************************ 00:16:30.323 12:20:00 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:30.323 * Looking for test storage... 00:16:30.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:30.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.323 --rc genhtml_branch_coverage=1 00:16:30.323 --rc genhtml_function_coverage=1 00:16:30.323 --rc genhtml_legend=1 00:16:30.323 --rc geninfo_all_blocks=1 00:16:30.323 --rc geninfo_unexecuted_blocks=1 00:16:30.323 00:16:30.323 ' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:30.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.323 --rc genhtml_branch_coverage=1 00:16:30.323 --rc genhtml_function_coverage=1 00:16:30.323 --rc genhtml_legend=1 00:16:30.323 --rc geninfo_all_blocks=1 00:16:30.323 --rc geninfo_unexecuted_blocks=1 00:16:30.323 00:16:30.323 ' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:30.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.323 --rc genhtml_branch_coverage=1 00:16:30.323 --rc genhtml_function_coverage=1 00:16:30.323 --rc genhtml_legend=1 00:16:30.323 --rc geninfo_all_blocks=1 00:16:30.323 --rc geninfo_unexecuted_blocks=1 00:16:30.323 00:16:30.323 ' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:30.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.323 --rc genhtml_branch_coverage=1 00:16:30.323 --rc genhtml_function_coverage=1 00:16:30.323 --rc genhtml_legend=1 00:16:30.323 --rc geninfo_all_blocks=1 00:16:30.323 --rc geninfo_unexecuted_blocks=1 00:16:30.323 00:16:30.323 ' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.323 
12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.323 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.323 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
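Two points of context for the nvmf/common.sh trace above. The "line 33: [: : integer expression expected" message comes from an arithmetic test on an empty string ('[' '' -eq 1 ']'); the harness tolerates it, and the usual defensive pattern would default the operand before testing (SOME_FLAG here is a hypothetical stand-in, not the variable common.sh actually tests):

# quote and default the value so [ never sees an empty integer operand
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi

The nvmf_veth_init run that follows first tears down any leftover topology (the "Cannot find device" messages below are expected on a clean host, hence the "true" after each) and then builds a bridged veth layout: initiator-side interfaces stay in the default namespace, target-side interfaces move into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge so 10.0.0.1/10.0.0.2 can reach 10.0.0.3/10.0.0.4. A minimal sketch of one initiator/target pair, using the same commands the trace shows (the second pair and the iptables rules are analogous):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br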
00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:30.324 Cannot find device "nvmf_init_br" 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:30.324 Cannot find device "nvmf_init_br2" 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:30.324 Cannot find device "nvmf_tgt_br" 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:30.324 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.583 Cannot find device "nvmf_tgt_br2" 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:30.583 Cannot find device "nvmf_init_br" 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:30.583 Cannot find device "nvmf_init_br2" 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:30.583 Cannot find device "nvmf_tgt_br" 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:30.583 Cannot find device "nvmf_tgt_br2" 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:30.583 Cannot find device "nvmf_br" 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:30.583 Cannot find device "nvmf_init_if" 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:30.583 Cannot find device "nvmf_init_if2" 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.583 
12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:30.583 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:30.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:16:30.842 00:16:30.842 --- 10.0.0.3 ping statistics --- 00:16:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.842 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:30.842 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:30.842 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.126 ms 00:16:30.842 00:16:30.842 --- 10.0.0.4 ping statistics --- 00:16:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.842 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:30.842 00:16:30.842 --- 10.0.0.1 ping statistics --- 00:16:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.842 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:30.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:30.842 00:16:30.842 --- 10.0.0.2 ping statistics --- 00:16:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.842 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=87897 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 87897 00:16:30.842 12:20:01 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87897 ']' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.842 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:30.842 [2024-12-05 12:20:01.599256] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:16:30.842 [2024-12-05 12:20:01.599369] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.102 [2024-12-05 12:20:01.744187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.102 [2024-12-05 12:20:01.795365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.102 [2024-12-05 12:20:01.795437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.102 [2024-12-05 12:20:01.795448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.102 [2024-12-05 12:20:01.795456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.102 [2024-12-05 12:20:01.795462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
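The launch traced here reduces to starting nvmf_tgt inside the target namespace and polling its UNIX-domain RPC socket until it answers. A simplified sketch of that start-and-wait step (the harness's waitforlisten does the same with a bounded max_retries; rpc_get_methods is used below only as a cheap liveness probe, not the harness's exact check):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# poll the RPC socket until the app is up and listening
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done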
00:16:31.102 [2024-12-05 12:20:01.796805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.102 [2024-12-05 12:20:01.796888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.102 [2024-12-05 12:20:01.796894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.102 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.102 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:31.102 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:31.102 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:31.102 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:31.360 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.360 12:20:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:31.619 [2024-12-05 12:20:02.251011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.619 12:20:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:31.878 Malloc0 00:16:31.878 12:20:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:32.138 12:20:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:32.398 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:32.656 [2024-12-05 12:20:03.294302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:32.656 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:32.656 [2024-12-05 12:20:03.514456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:32.915 [2024-12-05 12:20:03.730813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87994 00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87994 /var/tmp/bdevperf.sock 
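Condensed, the target-side provisioning just traced is the following RPC sequence (every command appears verbatim in the trace above; rpc is shorthand for the repo's rpc.py):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                                # three listeners = three paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done

bdevperf (started next with -z, waiting on /var/tmp/bdevperf.sock) attaches to these listeners below, and the test then removes and re-adds them to force path failover.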
00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87994 ']'
00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:32.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:32.915 12:20:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:16:33.483 12:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:33.483 12:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:16:33.483 12:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:33.741 NVMe0n1
00:16:34.000 12:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:34.000
00:16:34.000 12:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:34.000 12:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88024
00:16:34.000 12:20:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:16:34.936 12:20:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:35.195 [2024-12-05 12:20:05.981906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74c830 is same with the state(6) to be set
00:16:35.195 12:20:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:16:38.483 12:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:38.483
00:16:38.742 12:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:16:39.001 [2024-12-05 12:20:09.626399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74d5e0 is same with the state(6) to be set
00:16:39.001 12:20:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:16:42.287 12:20:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:42.287 [2024-12-05 12:20:12.913802] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:42.287 12:20:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:16:43.220 12:20:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:16:43.478 [2024-12-05 12:20:14.192049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897550 is same with the state(6) to be set
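Taken together, the trace above is the failover choreography of failover.sh: attach paths on 4420 and 4421, remove 4420 (I/O fails over to 4421), attach 4422, remove 4421 (fail over again), re-add 4420, then remove 4422. Each nvmf_subsystem_remove_listener produces a burst of identical recv-state notices from tcp.c as that listener's queue pairs are torn down, and the commands caught in flight surface as "ABORTED - SQ DELETION" completions in the try.txt dump below. The initiator-side steps reduce to the following (commands as run above, against the bdevperf RPC socket; attach is a local shorthand, not a harness function):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
attach() {  # one controller name, multiple paths, failover policy
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 \
        -s "$1" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
}
attach 4420 && attach 4421
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
sleep 3 && attach 4422
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

As a sanity check on the results block below: MiB/s = IOPS * io_size / 2^20, so 10626.39 * 4096 / 1048576 is approximately 41.51, matching the reported "mibps"; the 3021 io_failed entries are the commands caught mid-flight by the three listener removals during the 15 s verify run.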
00:16:43.479 12:20:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88024
00:16:50.044 {
00:16:50.044   "results": [
00:16:50.044     {
00:16:50.044       "job": "NVMe0n1",
00:16:50.044       "core_mask": "0x1",
00:16:50.044       "workload": "verify",
00:16:50.044       "status": "finished",
00:16:50.044       "verify_range": {
00:16:50.044         "start": 0,
00:16:50.044         "length": 16384
00:16:50.044       },
00:16:50.044       "queue_depth": 128,
00:16:50.044       "io_size": 4096,
00:16:50.044       "runtime": 15.008292,
00:16:50.044       "iops": 10626.392396949634,
00:16:50.044       "mibps": 41.509345300584506,
00:16:50.044       "io_failed": 3021,
00:16:50.044       "io_timeout": 0,
00:16:50.044       "avg_latency_us": 11795.313117011785,
00:16:50.044       "min_latency_us": 577.1636363636363,
00:16:50.044       "max_latency_us": 14477.498181818182
00:16:50.044     }
00:16:50.044   ],
00:16:50.044   "core_count": 1
00:16:50.044 }
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87994
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87994 ']'
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87994
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87994
00:16:50.044 killing process with pid 87994
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87994'
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87994
00:16:50.044 12:20:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87994
00:16:50.044 12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:50.044 [2024-12-05 12:20:03.792781] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:16:50.044 [2024-12-05 12:20:03.792870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87994 ] 00:16:50.044 [2024-12-05 12:20:03.939878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.044 [2024-12-05 12:20:03.996023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.044 Running I/O for 15 seconds... 00:16:50.044 10762.00 IOPS, 42.04 MiB/s [2024-12-05T12:20:20.913Z] [2024-12-05 12:20:05.983414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.044 [2024-12-05 12:20:05.983459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.044 [2024-12-05 12:20:05.983485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.044 [2024-12-05 12:20:05.983513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.044 [2024-12-05 12:20:05.983529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.044 [2024-12-05 12:20:05.983544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.044 [2024-12-05 12:20:05.983560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.044 [2024-12-05 12:20:05.983590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.044 [2024-12-05 12:20:05.983620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.044 [2024-12-05 12:20:05.983633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.044 [2024-12-05 12:20:05.983647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.044 [2024-12-05 12:20:05.983659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.044 [2024-12-05 12:20:05.983673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.044 [2024-12-05 12:20:05.983686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.044 [2024-12-05 12:20:05.983700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.983976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.983989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 
12:20:05.984369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.045 [2024-12-05 12:20:05.984400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.045 [2024-12-05 12:20:05.984438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.045 [2024-12-05 12:20:05.984468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.045 [2024-12-05 12:20:05.984504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.045 [2024-12-05 12:20:05.984535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.045 [2024-12-05 12:20:05.984566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.045 [2024-12-05 12:20:05.984596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.045 [2024-12-05 12:20:05.984650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.045 [2024-12-05 12:20:05.984952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.045 [2024-12-05 12:20:05.984965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.984978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.984991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 
[2024-12-05 12:20:05.985692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.985981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.985993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.986007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.986019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.986033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.986045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.986059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.986071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.986085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.986097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.986111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.986123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.986152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.046 [2024-12-05 12:20:05.986165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.046 [2024-12-05 12:20:05.986180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986337] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.047 [2024-12-05 12:20:05.986775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.986802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.986829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.986857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.986884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.986911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.986938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.986965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.986980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:50.047 [2024-12-05 12:20:05.986993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.047 [2024-12-05 12:20:05.987507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.047 [2024-12-05 12:20:05.987523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:05.987536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:05.987560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:05.987575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:05.987622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:50.048 [2024-12-05 12:20:05.987637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:50.048 [2024-12-05 12:20:05.987647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98584 len:8 PRP1 0x0 PRP2 0x0 00:16:50.048 [2024-12-05 12:20:05.987660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:05.987721] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:50.048 [2024-12-05 12:20:05.987777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.048 [2024-12-05 
12:20:05.987798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:05.987813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.048 [2024-12-05 12:20:05.987826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:05.987840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.048 [2024-12-05 12:20:05.987853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:05.987867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.048 [2024-12-05 12:20:05.987879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:05.987892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:50.048 [2024-12-05 12:20:05.987932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d3f30 (9): Bad file descriptor 00:16:50.048 [2024-12-05 12:20:05.991601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:50.048 [2024-12-05 12:20:06.014898] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
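The lines above record one complete failover cycle: every in-flight command on the qpair is aborted with "SQ DELETION", bdev_nvme starts failover from 10.0.0.3:4420 to 10.0.0.3:4421, the flush of the old qpair fails with a bad file descriptor, and the controller is disconnected and reset, with the reset completing about 27 ms after the failover trigger. A minimal log-analysis sketch, below, measures that window from the bracketed timestamps; the regex, helper name, and the two embedded sample lines are mine, not part of SPDK.

```python
# Pull the bracketed microsecond timestamps out of bdev_nvme failover
# lines like the ones above and measure the reset window. Illustrative
# only; the regex and helper are not part of the SPDK test scripts.
import re
from datetime import datetime

TS = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\]")

def stamp(line: str) -> datetime:
    """Parse the leading [YYYY-MM-DD HH:MM:SS.ffffff] timestamp."""
    return datetime.strptime(TS.search(line).group(1), "%Y-%m-%d %H:%M:%S.%f")

start = stamp("[2024-12-05 12:20:05.987721] bdev_nvme.c:2052:"
              "bdev_nvme_failover_trid: *NOTICE*: Start failover "
              "from 10.0.0.3:4420 to 10.0.0.3:4421")
done = stamp("[2024-12-05 12:20:06.014898] bdev_nvme.c:2282:"
             "bdev_nvme_reset_ctrlr_complete: *NOTICE*: "
             "Resetting controller successful.")

# ~27 ms from the failover trigger to a usable controller on the new port.
print(f"failover window: {(done - start).total_seconds() * 1000:.1f} ms")
```

The IOPS progress ticks that follow (10730.00, 10898.00, ...) show throughput recovering immediately after the reset, before the next injected failover at 12:20:09 repeats the same abort-and-reconnect sequence.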
00:16:50.048 10730.00 IOPS, 41.91 MiB/s [2024-12-05T12:20:20.917Z] 10898.00 IOPS, 42.57 MiB/s [2024-12-05T12:20:20.917Z] 10947.00 IOPS, 42.76 MiB/s [2024-12-05T12:20:20.917Z] [2024-12-05 12:20:09.627771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.048 [2024-12-05 12:20:09.627817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.627841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.627857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.627896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.627909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.627923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.627935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.627949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.627961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.627975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.627986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.048 [2024-12-05 12:20:09.628387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.048 [2024-12-05 12:20:09.628400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair repeats for each remaining queued command (WRITE lba:34784-35392, READ lba:34440-34616, len:8 each), every one completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:16:50.051 [2024-12-05
12:20:09.631222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.051 [2024-12-05 12:20:09.631235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.051 [2024-12-05 12:20:09.631262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.051 [2024-12-05 12:20:09.631301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.051 [2024-12-05 12:20:09.631345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.051 [2024-12-05 12:20:09.631373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:50.051 [2024-12-05 12:20:09.631425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35440 len:8 PRP1 0x0 PRP2 0x0 00:16:50.051 [2024-12-05 12:20:09.631439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:50.051 [2024-12-05 12:20:09.631466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:50.051 [2024-12-05 12:20:09.631485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35448 len:8 PRP1 0x0 PRP2 0x0 00:16:50.051 [2024-12-05 12:20:09.631504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631569] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:16:50.051 [2024-12-05 12:20:09.631625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.051 [2024-12-05 12:20:09.631647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.051 [2024-12-05 12:20:09.631691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.051 [2024-12-05 12:20:09.631717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.051 [2024-12-05 12:20:09.631742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.051 [2024-12-05 12:20:09.631755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:16:50.051 [2024-12-05 12:20:09.631804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d3f30 (9): Bad file descriptor 00:16:50.051 [2024-12-05 12:20:09.635540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:16:50.051 [2024-12-05 12:20:09.657809] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:16:50.051 10905.00 IOPS, 42.60 MiB/s [2024-12-05T12:20:20.920Z] 10918.50 IOPS, 42.65 MiB/s [2024-12-05T12:20:20.920Z] 10953.14 IOPS, 42.79 MiB/s [2024-12-05T12:20:20.920Z] 10954.50 IOPS, 42.79 MiB/s [2024-12-05T12:20:20.920Z] 10964.11 IOPS, 42.83 MiB/s [2024-12-05T12:20:20.920Z] [2024-12-05 12:20:14.192801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.052 [2024-12-05 12:20:14.192845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pair repeats for each remaining queued command (READ lba:49328-49584, WRITE lba:49720-50208, len:8 each), every one completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:16:50.054 [2024-12-05 12:20:14.195976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50216 len:8 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.195989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 
12:20:14.196268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.054 [2024-12-05 12:20:14.196414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.054 [2024-12-05 12:20:14.196428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:50.055 [2024-12-05 12:20:14.196462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:50.055 [2024-12-05 12:20:14.196927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.196967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:50.055 [2024-12-05 12:20:14.196983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:50.055 [2024-12-05 12:20:14.196994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49712 len:8 PRP1 0x0 PRP2 0x0 00:16:50.055 [2024-12-05 12:20:14.197007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.197068] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:16:50.055 [2024-12-05 12:20:14.197124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.055 [2024-12-05 12:20:14.197145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.197160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.055 [2024-12-05 12:20:14.197172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.197190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.055 [2024-12-05 12:20:14.197208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.197222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:50.055 [2024-12-05 12:20:14.197235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:50.055 [2024-12-05 12:20:14.197248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:16:50.055 [2024-12-05 12:20:14.197296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d3f30 (9): Bad file descriptor 00:16:50.055 [2024-12-05 12:20:14.201157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:16:50.055 [2024-12-05 12:20:14.222788] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
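Bursts like the one above are easier to read in aggregate than command by command. A minimal sketch of a summarizer, assuming the console output has been saved to a file (build.log is a hypothetical name) and keeps the nvme_io_qpair_print_command line format shown above:

# Count aborted queued I/O commands per opcode and report the LBA span,
# instead of reading the per-command NOTICE pairs one by one.
awk '/nvme_io_qpair_print_command/ {
    op = ($0 ~ /WRITE sqid/) ? "WRITE" : "READ"
    for (i = 1; i <= NF; i++) if ($i ~ /^lba:/) lba = substr($i, 5) + 0
    n[op]++
    if (!(op in lo) || lba < lo[op]) lo[op] = lba
    if (lba > hi[op]) hi[op] = lba
}
END { for (op in n) printf "%s: %d commands, lba %d..%d\n", op, n[op], lo[op], hi[op] }' build.log

On this part of the log it would report the WRITEs clustered around lba 49952-50336 and the READs around lba 49536-49712, matching the collapsed summary above.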
00:16:50.055 10865.30 IOPS, 42.44 MiB/s
[2024-12-05T12:20:20.924Z] 10790.45 IOPS, 42.15 MiB/s
[2024-12-05T12:20:20.924Z] 10741.83 IOPS, 41.96 MiB/s
[2024-12-05T12:20:20.924Z] 10692.85 IOPS, 41.77 MiB/s
[2024-12-05T12:20:20.924Z] 10662.07 IOPS, 41.65 MiB/s
[2024-12-05T12:20:20.924Z] 10626.73 IOPS, 41.51 MiB/s
00:16:50.055 Latency(us)
00:16:50.055 [2024-12-05T12:20:20.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:50.055 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:50.055 Verification LBA range: start 0x0 length 0x4000
00:16:50.055 NVMe0n1 : 15.01 10626.39 41.51 201.29 0.00 11795.31 577.16 14477.50
00:16:50.055 [2024-12-05T12:20:20.924Z] ===================================================================================================================
00:16:50.055 [2024-12-05T12:20:20.924Z] Total : 10626.39 41.51 201.29 0.00 11795.31 577.16 14477.50
00:16:50.055 Received shutdown signal, test time was about 15.000000 seconds
00:16:50.055
00:16:50.055 Latency(us)
00:16:50.055 [2024-12-05T12:20:20.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:50.055 [2024-12-05T12:20:20.924Z] ===================================================================================================================
00:16:50.055 [2024-12-05T12:20:20.924Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:50.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88232
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88232 /var/tmp/bdevperf.sock
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88232 ']'
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
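The grep -c / (( count != 3 )) pair above is the pass criterion for the first half of the test: bdev_nvme must have reset the controller successfully exactly three times, one per failover target. A minimal sketch of the same assertion pattern, assuming the bdevperf console output was captured to try.txt as in this run:

# Fail the test unless the log records the expected number of
# successful controller resets (one per failover in this scenario).
expected=3
count=$(grep -c 'Resetting controller successful' try.txt)
if (( count != expected )); then
    echo "expected $expected successful resets, saw $count" >&2
    exit 1
fi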
00:16:50.055 12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:50.055 12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:16:50.055 12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:50.055 12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:16:50.055 12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:16:50.055 [2024-12-05 12:20:20.671615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:16:50.055 12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:16:50.314 [2024-12-05 12:20:20.935893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:16:50.314 12:20:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:50.573 NVMe0n1
00:16:50.573 12:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:50.832
00:16:50.832 12:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:51.091
00:16:51.091 12:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:20:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:16:51.350 12:20:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:51.609 12:20:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:16:54.897 12:20:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:20:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:16:54.897 12:20:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
12:20:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88351
12:20:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88351
00:16:55.832 {
00:16:55.832   "results": [
00:16:55.832     {
00:16:55.832       "job": "NVMe0n1",
00:16:55.832       "core_mask": "0x1",
00:16:55.832       "workload": "verify",
00:16:55.832       "status": "finished",
00:16:55.832       "verify_range": {
00:16:55.832         "start": 0,
00:16:55.832         "length": 16384
00:16:55.832       },
00:16:55.832       "queue_depth": 128,
00:16:55.832       "io_size": 4096,
00:16:55.832       "runtime": 1.008245,
00:16:55.832       "iops": 9943.019801734697,
00:16:55.832       "mibps": 38.83992110052616,
00:16:55.832       "io_failed": 0,
00:16:55.832       "io_timeout": 0,
00:16:55.832       "avg_latency_us": 12818.10058707776,
00:16:55.832       "min_latency_us": 1802.24,
00:16:55.832       "max_latency_us": 15609.483636363637
00:16:55.832     }
00:16:55.832   ],
00:16:55.832   "core_count": 1
00:16:55.832 }
12:20:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:55.832 [2024-12-05 12:20:20.171482] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:16:55.832 [2024-12-05 12:20:20.171586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88232 ]
00:16:55.832 [2024-12-05 12:20:20.311414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:55.832 [2024-12-05 12:20:20.355402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:55.832 [2024-12-05 12:20:22.226269] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:16:55.832 [2024-12-05 12:20:22.226836 .. 12:20:22.227546] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: four queued admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3, cdw10:00000000 cdw11:00000000) aborted: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated NOTICE pairs collapsed]
00:16:55.832 [2024-12-05 12:20:22.227674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:16:55.832 [2024-12-05 12:20:22.227798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:16:55.832 [2024-12-05 12:20:22.227895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f9f30 (9): Bad file descriptor
00:16:55.832 [2024-12-05 12:20:22.236892] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:16:55.832 Running I/O for 1 seconds...
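perform_tests reports the run's outcome as the JSON document printed above, so scripted checks do not have to scrape the human-readable table that follows. A small sketch, assuming the JSON has been saved to results.json (a hypothetical path) and that jq is available:

# Pull the summary fields out of a saved bdevperf results document.
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed, avg latency \(.avg_latency_us) us"' results.json
# A simple pass/fail gate on I/O errors, using the same data:
test "$(jq '.results[0].io_failed' results.json)" -eq 0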
00:16:55.832 9897.00 IOPS, 38.66 MiB/s
00:16:55.832 Latency(us)
[2024-12-05T12:20:26.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:55.832 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:55.832 Verification LBA range: start 0x0 length 0x4000
00:16:55.832 NVMe0n1 : 1.01 9943.02 38.84 0.00 0.00 12818.10 1802.24 15609.48
00:16:55.832 [2024-12-05T12:20:26.701Z] ===================================================================================================================
00:16:55.832 [2024-12-05T12:20:26.701Z] Total : 9943.02 38.84 0.00 0.00 12818.10 1802.24 15609.48
12:20:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
12:20:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:56.090 12:20:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:56.349 12:20:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:20:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:16:56.608 12:20:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:57.175 12:20:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:17:00.520 12:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:20:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:17:00.520 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88232
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88232 ']'
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88232
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88232
killing process with pid 88232
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88232'
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88232
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88232
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:00.779 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:00.779 rmmod nvme_tcp 00:17:00.779 rmmod nvme_fabrics 00:17:01.038 rmmod nvme_keyring 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 87897 ']' 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 87897 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87897 ']' 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87897 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87897 00:17:01.038 killing process with pid 87897 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87897' 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87897 00:17:01.038 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87897 00:17:01.298 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.298 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.298 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.298 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:17:01.298 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:17:01.298 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.298 12:20:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:01.298 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:17:01.556 00:17:01.556 real 0m31.311s 00:17:01.556 user 2m0.593s 00:17:01.556 sys 0m4.656s 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:01.556 ************************************ 00:17:01.556 END TEST nvmf_failover 00:17:01.556 ************************************ 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:01.556 12:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:01.557 12:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.557 ************************************ 00:17:01.557 START TEST nvmf_host_discovery 00:17:01.557 ************************************ 00:17:01.557 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:01.557 * Looking for test storage... 
00:17:01.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.557 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:01.557 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:01.557 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:17:01.817 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:01.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.818 --rc genhtml_branch_coverage=1 00:17:01.818 --rc genhtml_function_coverage=1 00:17:01.818 --rc genhtml_legend=1 00:17:01.818 --rc geninfo_all_blocks=1 00:17:01.818 --rc geninfo_unexecuted_blocks=1 00:17:01.818 00:17:01.818 ' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:01.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.818 --rc genhtml_branch_coverage=1 00:17:01.818 --rc genhtml_function_coverage=1 00:17:01.818 --rc genhtml_legend=1 00:17:01.818 --rc geninfo_all_blocks=1 00:17:01.818 --rc geninfo_unexecuted_blocks=1 00:17:01.818 00:17:01.818 ' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:01.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.818 --rc genhtml_branch_coverage=1 00:17:01.818 --rc genhtml_function_coverage=1 00:17:01.818 --rc genhtml_legend=1 00:17:01.818 --rc geninfo_all_blocks=1 00:17:01.818 --rc geninfo_unexecuted_blocks=1 00:17:01.818 00:17:01.818 ' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:01.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:01.818 --rc genhtml_branch_coverage=1 00:17:01.818 --rc genhtml_function_coverage=1 00:17:01.818 --rc genhtml_legend=1 00:17:01.818 --rc geninfo_all_blocks=1 00:17:01.818 --rc geninfo_unexecuted_blocks=1 00:17:01.818 00:17:01.818 ' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:01.818 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
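The variables above name the virtual topology that the ip commands which follow assemble: a target network namespace holding 10.0.0.3/10.0.0.4, reached from host-side initiator interfaces at 10.0.0.1/10.0.0.2 through veth pairs enslaved to one bridge. A condensed sketch of the same setup with a single veth pair per side, assuming root privileges and the interface names used in this log:

# Target namespace, one veth pair toward it, one toward the initiator.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# Address the endpoints: initiator on the host, target inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the peer ends so initiator and target share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Sanity check, as the log does below: the target must answer from the host.
ping -c 1 10.0.0.3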
00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.818 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:01.819 Cannot find device "nvmf_init_br" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:01.819 Cannot find device "nvmf_init_br2" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:01.819 Cannot find device "nvmf_tgt_br" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.819 Cannot find device "nvmf_tgt_br2" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:01.819 Cannot find device "nvmf_init_br" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:01.819 Cannot find device "nvmf_init_br2" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:01.819 Cannot find device "nvmf_tgt_br" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:01.819 Cannot find device "nvmf_tgt_br2" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:01.819 Cannot find device "nvmf_br" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:01.819 Cannot find device "nvmf_init_if" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:01.819 Cannot find device "nvmf_init_if2" 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.819 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:02.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:02.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:02.079 00:17:02.079 --- 10.0.0.3 ping statistics --- 00:17:02.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.079 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:02.079 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:02.079 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:17:02.079 00:17:02.079 --- 10.0.0.4 ping statistics --- 00:17:02.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.079 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:02.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:02.079 00:17:02.079 --- 10.0.0.1 ping statistics --- 00:17:02.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.079 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:02.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:02.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:17:02.079 00:17:02.079 --- 10.0.0.2 ping statistics --- 00:17:02.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.079 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=88714 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 88714 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88714 ']' 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.079 12:20:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.338 [2024-12-05 12:20:32.993369] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:17:02.338 [2024-12-05 12:20:32.993464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.338 [2024-12-05 12:20:33.147299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.338 [2024-12-05 12:20:33.204465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.338 [2024-12-05 12:20:33.204520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.338 [2024-12-05 12:20:33.204535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.338 [2024-12-05 12:20:33.204547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.338 [2024-12-05 12:20:33.204556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.338 [2024-12-05 12:20:33.205020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.275 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.276 [2024-12-05 12:20:34.058762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.276 [2024-12-05 12:20:34.066872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.276 null0 00:17:03.276 12:20:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.276 null1 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.276 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88770 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88770 /tmp/host.sock 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88770 ']' 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.276 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.535 [2024-12-05 12:20:34.159771] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:17:03.535 [2024-12-05 12:20:34.159882] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88770 ] 00:17:03.535 [2024-12-05 12:20:34.314325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.535 [2024-12-05 12:20:34.370664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.794 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.053 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.054 [2024-12-05 12:20:34.858957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.054 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:04.313 12:20:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:04.313 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:04.314 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:04.314 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.314 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.314 12:20:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:17:04.314 12:20:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:17:04.878 [2024-12-05 12:20:35.522045] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:04.878 [2024-12-05 12:20:35.522075] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:04.878 [2024-12-05 12:20:35.522117] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:04.878 
[2024-12-05 12:20:35.608148] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:04.878 [2024-12-05 12:20:35.662501] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:04.878 [2024-12-05 12:20:35.663373] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x183c860:1 started. 00:17:04.878 [2024-12-05 12:20:35.665088] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:04.878 [2024-12-05 12:20:35.665115] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:04.878 [2024-12-05 12:20:35.670313] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x183c860 was disconnected and freed. delete nvme_qpair. 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.444 12:20:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.444 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.702 [2024-12-05 12:20:36.323896] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x183ce10:1 started. 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.702 [2024-12-05 12:20:36.330365] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x183ce10 was disconnected and freed. delete nvme_qpair. 
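
The helpers being traced here can be reconstructed from the xtrace itself: waitforcondition (common/autotest_common.sh@918-924) polls an arbitrary shell condition up to ten times with a one-second back-off, get_bdev_list (host/discovery.sh@55) flattens the bdev names into one sorted line, and get_notification_count (host/discovery.sh@74-75) advances a running notify_id cursor so each check counts only events raised since the previous one. A minimal sketch, assuming the timeout fallthrough and the rpc_cmd wrapper, neither of which is visible in this excerpt:

# Reconstructed from the xtrace above; 'return 1' on timeout is an
# assumption, as is rpc_cmd resolving to SPDK's scripts/rpc.py.
waitforcondition() {
	local cond=$1
	local max=10
	while (( max-- )); do
		# Re-evaluate the condition string each round, as @921 does.
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1
}

get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_notification_count() {
	# Count notifications newer than the cursor, then advance it.
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$((notify_id + notification_count))
}

Called exactly as in the trace, e.g. waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'.
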
00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.702 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.702 [2024-12-05 12:20:36.439431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:05.702 [2024-12-05 12:20:36.439934] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:05.703 [2024-12-05 12:20:36.439957] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:05.703 12:20:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.703 [2024-12-05 12:20:36.526013] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.703 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # sort -n 00:17:05.961 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.961 [2024-12-05 12:20:36.585461] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:17:05.961 [2024-12-05 12:20:36.585531] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:05.961 [2024-12-05 12:20:36.585545] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:05.961 [2024-12-05 12:20:36.585551] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:05.961 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:17:05.961 12:20:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@921 -- # get_notification_count 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 [2024-12-05 12:20:37.727933] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:17:06.895 [2024-12-05 12:20:37.727984] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:06.895 [2024-12-05 12:20:37.732723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.895 [2024-12-05 12:20:37.732755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.895 [2024-12-05 12:20:37.732768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.895 [2024-12-05 12:20:37.732777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.895 [2024-12-05 12:20:37.732787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.895 [2024-12-05 12:20:37.732796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.895 [2024-12-05 12:20:37.732805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.895 [2024-12-05 12:20:37.732814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.895 [2024-12-05 12:20:37.732822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b75c0 is same with the state(6) to be set 00:17:06.895 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition 
'[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:06.896 [2024-12-05 12:20:37.742695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b75c0 (9): Bad file descriptor 00:17:06.896 [2024-12-05 12:20:37.752717] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:06.896 [2024-12-05 12:20:37.752763] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:06.896 [2024-12-05 12:20:37.752772] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:06.896 [2024-12-05 12:20:37.752778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:06.896 [2024-12-05 12:20:37.752811] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:06.896 [2024-12-05 12:20:37.752893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:06.896 [2024-12-05 12:20:37.752965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b75c0 with addr=10.0.0.3, port=4420 00:17:06.896 [2024-12-05 12:20:37.752978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b75c0 is same with the state(6) to be set 00:17:06.896 [2024-12-05 12:20:37.752997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b75c0 (9): Bad file descriptor 00:17:06.896 [2024-12-05 12:20:37.753026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:06.896 [2024-12-05 12:20:37.753039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:06.896 [2024-12-05 12:20:37.753051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:06.896 [2024-12-05 12:20:37.753068] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:17:06.896 [2024-12-05 12:20:37.753077] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:06.896 [2024-12-05 12:20:37.753083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:06.896 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.156 [2024-12-05 12:20:37.762818] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:07.156 [2024-12-05 12:20:37.762861] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:07.156 [2024-12-05 12:20:37.762869] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:07.156 [2024-12-05 12:20:37.762874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:07.156 [2024-12-05 12:20:37.762908] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:07.156 [2024-12-05 12:20:37.762964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:07.156 [2024-12-05 12:20:37.762986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b75c0 with addr=10.0.0.3, port=4420 00:17:07.156 [2024-12-05 12:20:37.762998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b75c0 is same with the state(6) to be set 00:17:07.156 [2024-12-05 12:20:37.763014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b75c0 (9): Bad file descriptor 00:17:07.156 [2024-12-05 12:20:37.763040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:07.156 [2024-12-05 12:20:37.763052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:07.156 [2024-12-05 12:20:37.763062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:07.156 [2024-12-05 12:20:37.763071] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:07.156 [2024-12-05 12:20:37.763077] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:07.156 [2024-12-05 12:20:37.763082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:07.156 [2024-12-05 12:20:37.772919] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:07.156 [2024-12-05 12:20:37.772963] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:07.156 [2024-12-05 12:20:37.772971] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:07.156 [2024-12-05 12:20:37.772976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:07.156 [2024-12-05 12:20:37.773125] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:17:07.156 [2024-12-05 12:20:37.773232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:07.156 [2024-12-05 12:20:37.773408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b75c0 with addr=10.0.0.3, port=4420 00:17:07.156 [2024-12-05 12:20:37.773532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b75c0 is same with the state(6) to be set 00:17:07.156 [2024-12-05 12:20:37.773725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b75c0 (9): Bad file descriptor 00:17:07.156 [2024-12-05 12:20:37.773854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:07.156 [2024-12-05 12:20:37.773952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:07.156 [2024-12-05 12:20:37.774041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:07.156 [2024-12-05 12:20:37.774114] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:07.156 [2024-12-05 12:20:37.774133] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:07.156 [2024-12-05 12:20:37.774139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:07.156 [2024-12-05 12:20:37.783145] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:07.156 [2024-12-05 12:20:37.783175] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:07.156 [2024-12-05 12:20:37.783183] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:07.156 [2024-12-05 12:20:37.783188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:07.156 [2024-12-05 12:20:37.783309] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:07.156 [2024-12-05 12:20:37.783413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:07.156 [2024-12-05 12:20:37.783530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b75c0 with addr=10.0.0.3, port=4420 00:17:07.156 [2024-12-05 12:20:37.783638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b75c0 is same with the state(6) to be set 00:17:07.156 [2024-12-05 12:20:37.783759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b75c0 (9): Bad file descriptor 00:17:07.156 [2024-12-05 12:20:37.783869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:07.156 [2024-12-05 12:20:37.783946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:07.156 [2024-12-05 12:20:37.784022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:07.156 [2024-12-05 12:20:37.784096] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
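The connect() failures repeating through this stretch all report errno 111, which on Linux is ECONNREFUSED: the @127 RPC above removed the 4420 listener, so each reconnect attempt to 10.0.0.3:4420 is refused until the discovery log page steers the path over to 4421. The symbolic name is easy to confirm (assuming python3 is on the PATH):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused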
00:17:07.156 [2024-12-05 12:20:37.784108] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:07.156 [2024-12-05 12:20:37.784113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:07.156 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.156 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:07.156 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:07.156 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:07.156 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:07.156 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:07.156 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:07.156 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:07.156 [2024-12-05 12:20:37.793321] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:07.157 [2024-12-05 12:20:37.793360] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:07.157 [2024-12-05 12:20:37.793370] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:07.157 [2024-12-05 12:20:37.793375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:07.157 [2024-12-05 12:20:37.793485] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:07.157 [2024-12-05 12:20:37.793569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:07.157 [2024-12-05 12:20:37.793690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b75c0 with addr=10.0.0.3, port=4420 00:17:07.157 [2024-12-05 12:20:37.793792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b75c0 is same with the state(6) to be set 00:17:07.157 [2024-12-05 12:20:37.793935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b75c0 (9): Bad file descriptor 00:17:07.157 [2024-12-05 12:20:37.794061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:07.157 [2024-12-05 12:20:37.794161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.157 [2024-12-05 12:20:37.794238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:07.157 [2024-12-05 12:20:37.794338] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:17:07.157 [2024-12-05 12:20:37.794352] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:07.157 [2024-12-05 12:20:37.794358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:07.157 [2024-12-05 12:20:37.803496] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:07.157 [2024-12-05 12:20:37.803528] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:07.157 [2024-12-05 12:20:37.803537] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:07.157 [2024-12-05 12:20:37.803542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:07.157 [2024-12-05 12:20:37.803727] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:07.157 [2024-12-05 12:20:37.803812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:07.157 [2024-12-05 12:20:37.803931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b75c0 with addr=10.0.0.3, port=4420 00:17:07.157 [2024-12-05 12:20:37.804029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b75c0 is same with the state(6) to be set 00:17:07.157 [2024-12-05 12:20:37.804138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b75c0 (9): Bad file descriptor 00:17:07.157 [2024-12-05 12:20:37.804279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:07.157 [2024-12-05 12:20:37.804397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:07.157 [2024-12-05 12:20:37.804502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:07.157 [2024-12-05 12:20:37.804612] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:07.157 [2024-12-05 12:20:37.804631] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:07.157 [2024-12-05 12:20:37.804638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:07.157 [2024-12-05 12:20:37.813737] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:07.157 [2024-12-05 12:20:37.813781] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:17:07.157 [2024-12-05 12:20:37.813790] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:07.157 [2024-12-05 12:20:37.813795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:07.157 [2024-12-05 12:20:37.813899] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:07.157 [2024-12-05 12:20:37.814014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:07.157 [2024-12-05 12:20:37.814116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b75c0 with addr=10.0.0.3, port=4420 00:17:07.157 [2024-12-05 12:20:37.814218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b75c0 is same with the state(6) to be set 00:17:07.157 [2024-12-05 12:20:37.814342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b75c0 (9): Bad file descriptor 00:17:07.157 [2024-12-05 12:20:37.814492] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:17:07.157 [2024-12-05 12:20:37.814530] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:07.157 [2024-12-05 12:20:37.814563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:07.157 [2024-12-05 12:20:37.814673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:07.157 [2024-12-05 12:20:37.814767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:07.157 [2024-12-05 12:20:37.814843] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:07.157 [2024-12-05 12:20:37.814870] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:07.157 [2024-12-05 12:20:37.814877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
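The bdev-list check above and the path check just below are thin jq pipelines over the host-side RPC socket. Sketches of the two helpers as reconstructed from the host/discovery.sh@55 and @63 xtrace lines, where rpc_cmd stands in for SPDK's rpc.py wrapper (a reconstruction, not the verbatim script source):

    get_bdev_list() {
        # Names of every bdev the host sees, space-joined: "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # trsvcid of each path of controller $1, numerically sorted: "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }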
00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:07.157 12:20:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:07.157 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:07.158 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:17:07.158 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:07.158 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:07.158 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:07.158 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.158 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:07.158 12:20:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:07.417 
12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.417 12:20:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.353 [2024-12-05 12:20:39.142185] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:08.353 [2024-12-05 12:20:39.142212] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:08.353 [2024-12-05 12:20:39.142231] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:08.613 [2024-12-05 12:20:39.228904] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:17:08.613 [2024-12-05 12:20:39.287189] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:17:08.613 [2024-12-05 12:20:39.287793] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x183c300:1 started. 00:17:08.613 [2024-12-05 12:20:39.289793] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:08.613 [2024-12-05 12:20:39.289849] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:17:08.613 [2024-12-05 12:20:39.291154] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x183c300 was disconnected and freed. delete nvme_qpair. 
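With the listener flipped to 4421, the @141 call above restarted the discovery service and, because of -w (wait_for_attach), blocked until the controller attached. The @143 step below then asserts that registering a second discovery service under the same bdev name is rejected with Code=-17 (File exists). The equivalent direct invocation, assuming a standard SPDK tree where rpc_cmd forwards to scripts/rpc.py:

    # First call succeeds and blocks until attach because of -w:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w
    # Re-running it with the same -b name fails: Code=-17 Msg=File exists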
00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.613 2024/12/05 12:20:39 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:08.613 request: 00:17:08.613 { 00:17:08.613 "method": "bdev_nvme_start_discovery", 00:17:08.613 "params": { 00:17:08.613 "name": "nvme", 00:17:08.613 "trtype": "tcp", 00:17:08.613 "traddr": "10.0.0.3", 00:17:08.613 "adrfam": "ipv4", 00:17:08.613 "trsvcid": "8009", 00:17:08.613 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:08.613 "wait_for_attach": true 00:17:08.613 } 00:17:08.613 } 00:17:08.613 Got JSON-RPC error response 00:17:08.613 GoRPCClient: error on JSON-RPC call 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:08.613 
12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:08.613 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.614 2024/12/05 12:20:39 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:08.614 request: 00:17:08.614 { 00:17:08.614 "method": "bdev_nvme_start_discovery", 00:17:08.614 "params": { 00:17:08.614 "name": "nvme_second", 00:17:08.614 "trtype": "tcp", 00:17:08.614 "traddr": "10.0.0.3", 00:17:08.614 "adrfam": "ipv4", 00:17:08.614 "trsvcid": "8009", 
00:17:08.614 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:08.614 "wait_for_attach": true 00:17:08.614 } 00:17:08.614 } 00:17:08.614 Got JSON-RPC error response 00:17:08.614 GoRPCClient: error on JSON-RPC call 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:08.614 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.873 12:20:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.810 [2024-12-05 12:20:40.566102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:09.810 [2024-12-05 12:20:40.566167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1848810 with addr=10.0.0.3, port=8010 00:17:09.810 [2024-12-05 12:20:40.566191] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:09.810 [2024-12-05 12:20:40.566201] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:09.810 [2024-12-05 12:20:40.566211] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:10.755 [2024-12-05 12:20:41.566063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:10.755 [2024-12-05 12:20:41.566118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1841250 with addr=10.0.0.3, port=8010 00:17:10.755 [2024-12-05 12:20:41.566135] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:10.755 [2024-12-05 12:20:41.566144] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:10.755 [2024-12-05 12:20:41.566153] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:17:12.129 [2024-12-05 12:20:42.565995] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:17:12.129 2024/12/05 12:20:42 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:17:12.129 request: 00:17:12.129 { 00:17:12.129 "method": "bdev_nvme_start_discovery", 00:17:12.129 "params": { 00:17:12.129 "name": "nvme_second", 00:17:12.129 "trtype": "tcp", 00:17:12.129 "traddr": "10.0.0.3", 00:17:12.129 "adrfam": "ipv4", 00:17:12.129 "trsvcid": "8010", 00:17:12.129 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:12.129 "wait_for_attach": false, 00:17:12.129 "attach_timeout_ms": 3000 00:17:12.129 } 00:17:12.129 } 00:17:12.129 Got JSON-RPC error response 00:17:12.129 GoRPCClient: error on JSON-RPC call 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
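The NOT wrapper driving these negative checks inverts the wrapped command's exit status, so the test step passes exactly when the RPC fails, here with Code=-110 (Connection timed out), since nothing listens on 10.0.0.3:8010 and attach_timeout_ms is 3000. A stripped-down sketch of the pattern visible in the @652-@679 xtrace; the real helper also validates the argument type and inspects exit codes above 128 for signals:

    NOT() {
        local es=0
        "$@" || es=$?
        # Invert: NOT succeeds only if the wrapped command failed.
        (( !es == 0 ))
    }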
00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88770 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:12.129 rmmod nvme_tcp 00:17:12.129 rmmod nvme_fabrics 00:17:12.129 rmmod nvme_keyring 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 88714 ']' 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 88714 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 88714 ']' 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 88714 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88714 00:17:12.129 killing process with pid 88714 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:12.129 
12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88714' 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 88714 00:17:12.129 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 88714 00:17:12.388 12:20:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.388 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.389 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:17:12.389 00:17:12.389 real 0m10.938s 00:17:12.389 
user 0m20.719s 00:17:12.389 sys 0m1.711s 00:17:12.389 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.389 ************************************ 00:17:12.389 END TEST nvmf_host_discovery 00:17:12.389 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.389 ************************************ 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.649 ************************************ 00:17:12.649 START TEST nvmf_host_multipath_status 00:17:12.649 ************************************ 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:12.649 * Looking for test storage... 00:17:12.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.649 --rc genhtml_branch_coverage=1 00:17:12.649 --rc genhtml_function_coverage=1 00:17:12.649 --rc genhtml_legend=1 00:17:12.649 --rc geninfo_all_blocks=1 00:17:12.649 --rc geninfo_unexecuted_blocks=1 00:17:12.649 00:17:12.649 ' 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.649 --rc genhtml_branch_coverage=1 00:17:12.649 --rc genhtml_function_coverage=1 00:17:12.649 --rc genhtml_legend=1 00:17:12.649 --rc geninfo_all_blocks=1 00:17:12.649 --rc geninfo_unexecuted_blocks=1 00:17:12.649 00:17:12.649 ' 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.649 --rc genhtml_branch_coverage=1 00:17:12.649 --rc genhtml_function_coverage=1 00:17:12.649 --rc genhtml_legend=1 00:17:12.649 --rc geninfo_all_blocks=1 00:17:12.649 --rc geninfo_unexecuted_blocks=1 00:17:12.649 00:17:12.649 ' 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.649 --rc genhtml_branch_coverage=1 00:17:12.649 --rc genhtml_function_coverage=1 00:17:12.649 --rc genhtml_legend=1 00:17:12.649 --rc geninfo_all_blocks=1 00:17:12.649 --rc geninfo_unexecuted_blocks=1 00:17:12.649 00:17:12.649 ' 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:12.649 12:20:43 
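The version gate traced above (lt 1.15 2 via cmp_versions) decides whether the installed lcov predates 2.x and therefore needs the extra --rc branch/function coverage options. A minimal standalone sketch of that dot-separated numeric comparison (a simplified re-implementation for illustration, not the exact scripts/common.sh helper):

    # Succeed when $1 < $2, comparing dot-separated numeric fields;
    # missing fields default to 0, so "1.15 < 2" holds.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'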
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.649 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.650 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
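The "[: : integer expression expected" complaint captured above comes from common.sh line 33 testing an empty variable with -eq; the run tolerates it because the test merely fails. A defensive variant would default the operand to a number first. A hedged sketch with a hypothetical flag name (the actual variable at line 33 is not visible in this trace):

    flag=""                           # empty, as in the trace
    if [ "${flag:-0}" -eq 1 ]; then   # :-0 keeps the operand numeric
        echo 'feature enabled'
    fi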
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.650 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:12.910 Cannot find device "nvmf_init_br" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:12.910 Cannot find device "nvmf_init_br2" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:12.910 Cannot find device "nvmf_tgt_br" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.910 Cannot find device "nvmf_tgt_br2" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:12.910 Cannot find device "nvmf_init_br" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:12.910 Cannot find device "nvmf_init_br2" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:12.910 Cannot find device "nvmf_tgt_br" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:12.910 Cannot find device "nvmf_tgt_br2" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:12.910 Cannot find device "nvmf_br" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:17:12.910 Cannot find device "nvmf_init_if" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:12.910 Cannot find device "nvmf_init_if2" 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:12.910 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:12.911 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:12.911 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:12.911 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:13.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:13.170 00:17:13.170 --- 10.0.0.3 ping statistics --- 00:17:13.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.170 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:13.170 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:13.170 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:17:13.170 00:17:13.170 --- 10.0.0.4 ping statistics --- 00:17:13.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.170 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:13.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:17:13.170 00:17:13.170 --- 10.0.0.1 ping statistics --- 00:17:13.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.170 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:13.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:17:13.170 00:17:13.170 --- 10.0.0.2 ping statistics --- 00:17:13.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.170 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=89292 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 89292 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89292 ']' 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
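Condensed, the nvmf_veth_init sequence traced above builds a bridged veth topology: the initiator interfaces (10.0.0.1/2) stay in the root namespace, the target interfaces (10.0.0.3/4) move into nvmf_tgt_ns_spdk, and the bridge halves of each veth pair are enslaved to nvmf_br. A minimal sketch using the device names and addresses from the trace (second initiator/target pair and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3    # root ns -> target ns across the bridge

The ipts wrapper then inserts the port-4420 ACCEPT rules tagged with an 'SPDK_NVMF:' comment, which is what lets the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore strip them cleanly afterwards.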
00:17:13.170 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:13.171 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.171 12:20:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:13.429 [2024-12-05 12:20:44.051255] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:13.429 [2024-12-05 12:20:44.051359] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.429 [2024-12-05 12:20:44.202006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:13.429 [2024-12-05 12:20:44.252963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.429 [2024-12-05 12:20:44.253025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.429 [2024-12-05 12:20:44.253035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.429 [2024-12-05 12:20:44.253043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.429 [2024-12-05 12:20:44.253049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.429 [2024-12-05 12:20:44.254400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.429 [2024-12-05 12:20:44.254407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.688 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.688 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:13.688 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.688 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.688 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:13.688 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.688 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89292 00:17:13.688 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:13.945 [2024-12-05 12:20:44.749188] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.946 12:20:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:14.204 Malloc0 00:17:14.469 12:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:14.728 12:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:14.985 12:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:14.985 [2024-12-05 12:20:45.790823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:14.985 12:20:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:15.242 [2024-12-05 12:20:46.075090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:15.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89381 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89381 /var/tmp/bdevperf.sock 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89381 ']' 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
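Put together, the target-side provisioning traced above is a handful of rpc.py calls against the nvmf_tgt started inside the namespace; condensed from the trace (arguments as shown there):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The host side then runs bdevperf (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) and attaches the same subsystem twice with bdev_nvme_attach_controller -x multipath, once per listener port, so both connections surface as paths of a single Nvme0n1 bdev.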
00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.242 12:20:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:16.615 12:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.615 12:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:17:16.615 12:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:16.615 12:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:16.872 Nvme0n1 00:17:16.872 12:20:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:17.439 Nvme0n1 00:17:17.439 12:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:17.439 12:20:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:19.376 12:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:19.376 12:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:19.636 12:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:19.894 12:20:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:20.829 12:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:20.829 12:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:20.829 12:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.829 12:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:21.087 12:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.087 12:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:21.087 12:20:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:21.087 12:20:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.345 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:21.345 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:21.345 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:21.345 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.603 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.603 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:21.603 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:21.603 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.860 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.860 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:21.860 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:21.860 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.118 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.118 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:22.118 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.118 12:20:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:22.376 12:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.376 12:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:22.376 12:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:22.634 12:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
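Each check_status round above reduces to the same probe: query bdevperf's bdev_nvme_get_io_paths over its private RPC socket and pull one boolean (current, connected, or accessible) out of the io_path whose listener port matches. A condensed sketch of that helper, using the socket path and jq filter from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    port_status() {    # port_status <trsvcid> <field> <expected>
        local got
        got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }
    port_status 4420 current true    # with 4420 optimized, it is the active path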
00:17:22.893 12:20:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:23.829 12:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:23.829 12:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:23.829 12:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.829 12:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:24.089 12:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:24.089 12:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:24.089 12:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:24.089 12:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.347 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.347 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:24.347 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.347 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:24.913 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.913 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:24.913 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.913 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:24.913 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.913 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:24.913 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:24.913 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.172 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.172 12:20:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:25.172 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.172 12:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:25.431 12:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.431 12:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:25.431 12:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:25.690 12:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:25.948 12:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:26.884 12:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:26.884 12:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:26.884 12:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.884 12:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:27.142 12:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.142 12:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:27.142 12:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:27.142 12:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.401 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:27.401 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:27.401 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:27.401 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.674 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.674 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:27.674 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.674 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:27.980 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.980 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:27.980 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:27.980 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.251 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.251 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:28.251 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.251 12:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:28.520 12:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.521 12:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:28.521 12:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:28.779 12:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:29.037 12:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:29.973 12:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:29.973 12:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:29.973 12:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:29.973 12:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:30.540 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:30.540 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:17:30.540 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:30.540 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:30.540 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:30.540 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:30.540 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:30.540 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.107 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.107 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:31.107 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.107 12:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:31.366 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.366 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:31.366 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.366 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:31.624 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:31.624 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:31.624 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.624 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:31.883 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:31.883 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:31.883 12:21:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:32.141 12:21:02 
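Each scenario boils down to flipping the ANA state advertised on the two listeners and then re-asserting what bdevperf sees; a condensed sketch of set_ANA_state as it recurs throughout the trace ($rpc as defined in the earlier sketch):

    set_ANA_state() {    # set_ANA_state <state for 4420> <state for 4421>
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
             -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }
    set_ANA_state inaccessible inaccessible   # both paths unusable: current=false, accessible=false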
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:32.401 12:21:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:33.338 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:33.338 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:33.338 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.338 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:33.597 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:33.597 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:33.597 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.597 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:33.855 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:33.855 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:33.855 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.855 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:34.114 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.114 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:34.114 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.114 12:21:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:34.373 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:34.373 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:34.373 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.373 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:17:34.632 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:34.632 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:34.632 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.632 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:34.889 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:34.889 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:34.889 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:35.147 12:21:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:35.406 12:21:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:36.340 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:36.340 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:36.340 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.340 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:36.599 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:36.599 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:36.599 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.599 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:37.165 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.165 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:37.165 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.165 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
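For orientation, the set_ANA_state / check_status / port_status traces in this run (the @59-@73 script markers) come from three small bash helpers in test/nvmf/host/multipath_status.sh. Below is a minimal sketch reconstructed from the trace, not the verbatim script; the literal NQN, target address, and socket path are the ones this run used:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # @59-@60: advertise a new ANA state on each listener
    # (first argument -> port 4420, second argument -> port 4421).
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # @64: query bdevperf's view of its I/O paths over the app socket and
    # compare one field (current / connected / accessible) for one port.
    # With bdevperf pinned to a single core (-c 0x4) there is one poll
    # group, so the jq filter yields a single true/false line.
    port_status() {
        local port=$1 field=$2 expected=$3
        local status
        status=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$status" == "$expected" ]]
    }

    # @68-@73: six assertions per call -- ports 4420/4421 x current,
    # connected, accessible. Any mismatch fails the test run.
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }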
00:17:37.165 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.165 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:37.165 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.165 12:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:37.424 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.424 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:37.424 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:37.424 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.682 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:37.682 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:37.682 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.682 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:37.941 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.941 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:38.199 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:38.199 12:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:38.457 12:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:38.457 12:21:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:39.833 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:39.833 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:39.833 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
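The @116-@121 sequence whose verification starts here is the active/active leg of the test: after bdev_nvme_set_multipath_policy moves Nvme0n1 off the default active/passive policy, both optimized paths are expected to report current=true at the same time. Condensed from the trace, assuming the helper sketch above:

    # @116: let the host submit I/O on every optimized path concurrently.
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy \
        -b Nvme0n1 -p active_active

    # @119-@121: with both listeners optimized, all six fields become true.
    set_ANA_state optimized optimized
    sleep 1   # give the host time to process the ANA change notice
    check_status true true true true true true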
00:17:39.833 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:39.833 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.833 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:39.833 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.833 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:40.091 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.091 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:40.091 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.091 12:21:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:40.350 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.350 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:40.350 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:40.350 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.608 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.608 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:40.608 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.608 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:40.866 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.866 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:40.866 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.866 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:41.124 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.124 
12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:41.124 12:21:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:41.382 12:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:41.640 12:21:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:42.574 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:42.574 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:42.574 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.574 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:42.832 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:42.832 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:42.832 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:42.832 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.090 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.090 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:43.090 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:43.090 12:21:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.350 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.350 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:43.350 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.350 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:43.608 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.608 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:43.608 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.609 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:43.868 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.868 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:43.868 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.868 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:44.126 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.127 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:44.127 12:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:44.692 12:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:44.692 12:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:45.628 12:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:45.628 12:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:45.628 12:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.628 12:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:46.194 12:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.194 12:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:46.194 12:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.194 12:21:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:46.452 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.452 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:46.452 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:46.452 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.452 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.452 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:46.452 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.452 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:46.710 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.710 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:46.710 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.710 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:46.969 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.969 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:46.969 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.969 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:47.227 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.227 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:47.227 12:21:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:47.488 12:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:47.745 12:21:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:48.679 12:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:48.679 12:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:48.679 12:21:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.679 12:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:48.936 12:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.936 12:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:48.936 12:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:48.936 12:21:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.501 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:49.501 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:49.501 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.501 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:49.501 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.501 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:49.501 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:49.502 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.760 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:49.760 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:49.760 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.760 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:50.018 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.018 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:50.018 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:50.018 12:21:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.276 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:50.276 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89381 00:17:50.276 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89381 ']' 00:17:50.276 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89381 00:17:50.276 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:50.276 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.276 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89381 00:17:50.535 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:50.535 killing process with pid 89381 00:17:50.535 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:50.535 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89381' 00:17:50.535 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89381 00:17:50.535 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89381 00:17:50.535 { 00:17:50.535 "results": [ 00:17:50.535 { 00:17:50.535 "job": "Nvme0n1", 00:17:50.535 "core_mask": "0x4", 00:17:50.535 "workload": "verify", 00:17:50.535 "status": "terminated", 00:17:50.535 "verify_range": { 00:17:50.535 "start": 0, 00:17:50.535 "length": 16384 00:17:50.535 }, 00:17:50.535 "queue_depth": 128, 00:17:50.535 "io_size": 4096, 00:17:50.535 "runtime": 33.0185, 00:17:50.535 "iops": 8922.876569196056, 00:17:50.535 "mibps": 34.854986598422094, 00:17:50.535 "io_failed": 0, 00:17:50.535 "io_timeout": 0, 00:17:50.535 "avg_latency_us": 14321.70939216618, 00:17:50.535 "min_latency_us": 610.6763636363636, 00:17:50.535 "max_latency_us": 4026531.84 00:17:50.535 } 00:17:50.535 ], 00:17:50.535 "core_count": 1 00:17:50.535 } 00:17:50.796 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89381 00:17:50.796 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:50.796 [2024-12-05 12:20:46.141017] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:50.796 [2024-12-05 12:20:46.141092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89381 ] 00:17:50.796 [2024-12-05 12:20:46.274166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.796 [2024-12-05 12:20:46.324879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.796 Running I/O for 90 seconds... 
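The @954-@978 trace above is the shared killprocess helper from common/autotest_common.sh tearing down bdevperf (pid 89381); the interleaved JSON is bdevperf flushing its final verify-job results (about 8923 IOPS over the 33 s run) on exit, and @141 then replays bdevperf's own log from try.txt, which continues below. A simplified reconstruction of the helper as it appears in the trace (not the verbatim function, which also covers FreeBSD and sudo-wrapped processes):

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1      # @954: refuse an empty pid
        kill -0 "$pid" || return 0       # @958: already gone, nothing to do
        if [[ "$(uname)" == Linux ]]; then
            # @959-@960: resolve the command name; reactor_2 is bdevperf's
            # reactor thread on core 2 (it was started with -c 0x4).
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ "$process_name" != sudo ]] || return 1   # @964: never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"   # @972
        kill "$pid"     # @973: SIGTERM lets bdevperf print its JSON results
        wait "$pid"     # @978: reap the process and propagate its exit code
    }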
00:17:50.796 10365.00 IOPS, 40.49 MiB/s [2024-12-05T12:21:21.665Z] 10324.50 IOPS, 40.33 MiB/s [2024-12-05T12:21:21.665Z] 10297.67 IOPS, 40.23 MiB/s [2024-12-05T12:21:21.665Z] 10269.25 IOPS, 40.11 MiB/s [2024-12-05T12:21:21.665Z] 10210.60 IOPS, 39.89 MiB/s [2024-12-05T12:21:21.665Z] 10165.83 IOPS, 39.71 MiB/s [2024-12-05T12:21:21.665Z] 10168.14 IOPS, 39.72 MiB/s [2024-12-05T12:21:21.665Z] 10169.25 IOPS, 39.72 MiB/s [2024-12-05T12:21:21.665Z] 10168.89 IOPS, 39.72 MiB/s [2024-12-05T12:21:21.665Z] 10165.50 IOPS, 39.71 MiB/s [2024-12-05T12:21:21.665Z] 10142.18 IOPS, 39.62 MiB/s [2024-12-05T12:21:21.665Z] 10127.83 IOPS, 39.56 MiB/s [2024-12-05T12:21:21.665Z] 10132.38 IOPS, 39.58 MiB/s [2024-12-05T12:21:21.665Z] 10108.64 IOPS, 39.49 MiB/s [2024-12-05T12:21:21.665Z] [2024-12-05 12:21:02.779853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.796 [2024-12-05 12:21:02.779918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:50.796 [2024-12-05 12:21:02.779969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.796 [2024-12-05 12:21:02.779987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:50.796 [2024-12-05 12:21:02.780007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.796 [2024-12-05 12:21:02.780020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.796 [2024-12-05 12:21:02.780038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:64 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.780982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.780995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.781125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.781174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.781210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.781243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.781288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.781328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.781361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.797 [2024-12-05 12:21:02.781402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.797 [2024-12-05 12:21:02.781434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:50.797 [2024-12-05 12:21:02.781452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.797 [2024-12-05 12:21:02.781465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 
sqhd:000d p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781955] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.781974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.781987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.782013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.782027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.782046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.782059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.782078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.782091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.782109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.782122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.782141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.782154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.782173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.782186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.783131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.783192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.783222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 [2024-12-05 12:21:02.783238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:50.798 [2024-12-05 12:21:02.783264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.798 
00:17:50.798 [2024-12-05 12:21:02.783] nvme_qpair.c: *NOTICE*: [continuation of the qid:1 failure burst: paired nvme_io_qpair_print_command records (READ sqid:1 nsid:1, lba 129856 through 130088 in len:8 steps, SGL TRANSPORT DATA BLOCK) each answered by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0021 through 003f, p:0 m:0 dnr:0]
00:17:50.799 9818.40 IOPS, 38.35 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 9204.75 IOPS, 35.96 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8663.29 IOPS, 33.84 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8182.00 IOPS, 31.96 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 7949.37 IOPS, 31.05 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8060.90 IOPS, 31.49 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8159.95 IOPS, 31.87 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8272.86 IOPS, 32.32 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8369.57 IOPS, 32.69 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8458.33 IOPS, 33.04 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8513.88 IOPS, 33.26 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8561.50 IOPS, 33.44 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8614.44 IOPS, 33.65 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8681.25 IOPS, 33.91 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8746.72 IOPS, 34.17 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 8800.77 IOPS, 34.38 MiB/s [2024-12-05T12:21:21.668Z]
00:17:50.799 [2024-12-05 12:21:18.455] nvme_qpair.c: *NOTICE*: [second qid:1 failure burst: WRITE sqid:1 nsid:1 commands (lba 40680 through 40920, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ sqid:1 nsid:1 commands (lba 40432 through 40560, len:8, SGL TRANSPORT DATA BLOCK), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 003c through 0057, p:0 m:0 dnr:0]
00:17:50.800 8853.74 IOPS, 34.58 MiB/s [2024-12-05T12:21:21.669Z]
00:17:50.800 8891.16 IOPS, 34.73 MiB/s [2024-12-05T12:21:21.669Z]
00:17:50.800 8924.00 IOPS, 34.86 MiB/s [2024-12-05T12:21:21.669Z]
00:17:50.800 Received shutdown signal, test time was about 33.019135 seconds
00:17:50.800
00:17:50.800 Latency(us)
00:17:50.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:50.800 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:50.800 Verification LBA range: start 0x0 length 0x4000
00:17:50.800 Nvme0n1 : 33.02 8922.88 34.85 0.00 0.00 14321.71 610.68 4026531.84
00:17:50.800 ===================================================================================================================
00:17:50.800 Total : 8922.88 34.85 0.00 0.00 14321.71 610.68 4026531.84
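For reference, every failed completion above carries the status pair 03/02: status code type 0x3 is the NVMe path-related group, and status code 0x2 within it is Asymmetric Access Inaccessible, the ANA state this test deliberately drives the path into. A minimal decoding sketch, assuming bash and the SCT/SC tables from the NVMe base specification (decode_nvme_status is a hypothetical helper name, not part of SPDK):

  # Hypothetical helper: decode the "(SCT/SC)" pair that spdk_nvme_print_completion
  # prints, e.g. "03/02" in the records above.
  decode_nvme_status() {
    local sct=$((16#${1%%/*})) sc=$((16#${1##*/}))
    case "$sct" in
      0) echo "generic command status, sc=$sc" ;;
      1) echo "command specific status, sc=$sc" ;;
      2) echo "media/data integrity error, sc=$sc" ;;
      3) case "$sc" in
           1) echo "path related: asymmetric access persistent loss" ;;
           2) echo "path related: asymmetric access inaccessible" ;;
           3) echo "path related: asymmetric access transition" ;;
           *) echo "path related, sc=$sc" ;;
         esac ;;
      *) echo "vendor specific or unknown, sct=$sct sc=$sc" ;;
    esac
  }
  decode_nvme_status 03/02   # -> path related: asymmetric access inaccessible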
00:17:50.800 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:50.800 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:17:50.800 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:50.800 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:17:50.800 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:50.800 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:51.059 rmmod nvme_tcp
00:17:51.059 rmmod nvme_fabrics
00:17:51.059 rmmod nvme_keyring
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 89292 ']'
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 89292
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89292 ']'
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89292
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89292
00:17:51.059 killing process with pid 89292
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89292'
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89292
00:17:51.059 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89292
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:17:51.318 12:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:51.318 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:51.577 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0
00:17:51.577
00:17:51.577 real 0m38.901s
00:17:51.577 user 2m6.565s
00:17:51.577 sys 0m9.480s
00:17:51.577 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:51.577 12:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:17:51.577 ************************************
00:17:51.577 END TEST nvmf_host_multipath_status
00:17:51.577 ************************************
00:17:51.577 12:21:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:17:51.577 12:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:51.577 12:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:51.577 12:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:17:51.577 ************************************
00:17:51.577 START TEST nvmf_discovery_remove_ifc
00:17:51.577 ************************************
00:17:51.577 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
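The harness's run_test brackets each test script with the banner rows and the real/user/sys accounting seen above. A rough bash equivalent of that pattern (run_test_sketch is a hypothetical stand-in, not SPDK's actual run_test implementation):

  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                # emits the real/user/sys summary seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }

  run_test_sketch nvmf_discovery_remove_ifc \
    /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp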
00:17:51.577 * Looking for test storage...
00:17:51.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:17:51.577 [common/autotest_common.sh@1692-1707 with scripts/common.sh@333-368: lcov --version reports 1.15, cmp_versions finds 1.15 < 2, so LCOV_OPTS and LCOV are exported with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1]
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:51.837 [nvmf/common.sh@9-22: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS='', NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=$(nvme gen-hostnqn) = nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3, NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3, NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"), NVME_CONNECT='nvme connect', NET_TYPE=virt, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn]
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:51.837 [paths/export.sh@2-6: PATH is rebuilt by repeatedly prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the stock /usr/local/bin:...:/var/lib/snapd/snap/bin value, then exported and echoed]
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:51.837 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
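The NVME_HOSTNQN/NVME_HOSTID pair derived above comes straight from nvme-cli. A minimal sketch of the same derivation and of how such a pair is typically consumed; the nvme connect line is only illustrative here (the actual connect in this test happens much later, and the cnode0 subsystem NQN is taken from further down this log):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing uuid

  # Illustrative use of the pair with the kernel initiator:
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"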
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:51.837 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init
00:17:51.838 [nvmf/common.sh@145-160: NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.2, NVMF_FIRST_TARGET_IP=10.0.0.3, NVMF_SECOND_TARGET_IP=10.0.0.4, NVMF_INITIATOR_IP=10.0.0.1, NVMF_BRIDGE=nvmf_br, NVMF_INITIATOR_INTERFACE=nvmf_init_if, NVMF_INITIATOR_INTERFACE2=nvmf_init_if2, NVMF_INITIATOR_BRIDGE=nvmf_init_br, NVMF_INITIATOR_BRIDGE2=nvmf_init_br2, NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk, NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE"), NVMF_TARGET_INTERFACE=nvmf_tgt_if, NVMF_TARGET_INTERFACE2=nvmf_tgt_if2, NVMF_TARGET_BRIDGE=nvmf_tgt_br, NVMF_TARGET_BRIDGE2=nvmf_tgt_br2]
00:17:51.838 [nvmf/common.sh@162-174: best-effort cleanup of any leftover topology; nomaster/down/delete for nvmf_init_br, nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2, nvmf_br, nvmf_init_if, nvmf_init_if2 plus netns-scoped deletes of nvmf_tgt_if/nvmf_tgt_if2. Each step prints 'Cannot find device'/'Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory' and falls through to true, since the previous test already tore everything down]
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:17:51.838 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
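The commands above build a two-initiator/two-target veth topology, with the target ends moved into the nvmf_tgt_ns_spdk namespace; bridging onto nvmf_br and the iptables ACCEPT rules follow below. A condensed standalone sketch using the same names and addresses as the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if           # first initiator
  ip addr add 10.0.0.2/24 dev nvmf_init_if2          # second initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target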
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:17:52.097 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:17:52.097 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms
00:17:52.097
00:17:52.097 --- 10.0.0.3 ping statistics ---
00:17:52.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:52.097 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:17:52.097 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:17:52.097 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms
00:17:52.097
00:17:52.097 --- 10.0.0.4 ping statistics ---
00:17:52.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:52.097 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:17:52.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:52.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:17:52.097 00:17:52.097 --- 10.0.0.1 ping statistics --- 00:17:52.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.097 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:52.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:17:52.097 00:17:52.097 --- 10.0.0.2 ping statistics --- 00:17:52.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.097 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=90735 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 90735 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 90735 ']' 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
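With connectivity verified in both directions, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten polls its RPC socket before the test proceeds. A hedged approximation of that launch-and-wait step (the polling loop is a sketch, not waitforlisten's exact code; rpc.py defaults to /var/tmp/spdk.sock, the socket named in the log):

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # Poll until the app answers on its default RPC socket.
  for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; then
      break
    fi
    sleep 0.1
  done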
00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.097 12:21:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:52.097 [2024-12-05 12:21:22.938806] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:52.097 [2024-12-05 12:21:22.938902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.355 [2024-12-05 12:21:23.092092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.355 [2024-12-05 12:21:23.145846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.355 [2024-12-05 12:21:23.145920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.355 [2024-12-05 12:21:23.145936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.355 [2024-12-05 12:21:23.145946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.355 [2024-12-05 12:21:23.145956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.355 [2024-12-05 12:21:23.146408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:52.614 [2024-12-05 12:21:23.357052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.614 [2024-12-05 12:21:23.365195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:52.614 null0 00:17:52.614 [2024-12-05 12:21:23.397098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90773 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 90773 /tmp/host.sock 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 90773 ']' 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:52.614 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.614 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:52.873 [2024-12-05 12:21:23.481810] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:17:52.873 [2024-12-05 12:21:23.481919] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90773 ] 00:17:52.873 [2024-12-05 12:21:23.628544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.873 [2024-12-05 12:21:23.681244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:53.132 12:21:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.132 12:21:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:54.068 [2024-12-05 12:21:24.900098] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:54.068 [2024-12-05 12:21:24.900144] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:54.068 [2024-12-05 12:21:24.900161] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:54.337 [2024-12-05 12:21:24.986200] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:54.337 [2024-12-05 12:21:25.040526] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:54.337 [2024-12-05 12:21:25.041274] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x150a010:1 started. 00:17:54.337 [2024-12-05 12:21:25.042970] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:54.337 [2024-12-05 12:21:25.043029] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:54.337 [2024-12-05 12:21:25.043059] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:54.337 [2024-12-05 12:21:25.043074] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:54.337 [2024-12-05 12:21:25.043091] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:54.337 [2024-12-05 12:21:25.048727] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x150a010 was disconnected and freed. delete nvme_qpair. 
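The attach-and-poll sequence above is driven by two small helpers in host/discovery_remove_ifc.sh whose expanded commands the xtrace shows but whose bodies it does not. A minimal reconstruction, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py and that the helpers match their traced expansions:

get_bdev_list() {
    # List bdev names over the host app's RPC socket, sorted and collapsed
    # onto a single line ("" when nothing is attached, "nvme0n1" after attach).
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected value.
    while [[ $(get_bdev_list) != "$1" ]]; do
        sleep 1
    done
}

So the "wait_for_bdev nvme0n1" at step 72 above returns as soon as the discovery service has attached the nvme0 controller and its namespace shows up in bdev_get_bdevs.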
00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.337 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:54.338 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.338 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:54.338 12:21:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:55.332 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:55.332 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:55.332 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:55.332 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:55.332 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.332 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:55.332 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:55.590 12:21:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.590 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:55.590 12:21:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:56.526 12:21:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:57.462 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:57.462 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:57.462 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.462 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:57.462 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:57.462 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:57.462 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:57.462 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.722 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:57.722 12:21:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:58.661 12:21:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:58.661 12:21:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:59.597 12:21:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:59.855 [2024-12-05 12:21:30.471514] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:59.855 [2024-12-05 12:21:30.471743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.855 [2024-12-05 12:21:30.471762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.855 [2024-12-05 12:21:30.471773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.855 [2024-12-05 12:21:30.471781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.855 [2024-12-05 12:21:30.471790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.855 [2024-12-05 12:21:30.471798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.855 [2024-12-05 12:21:30.471807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.855 [2024-12-05 12:21:30.471815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.855 [2024-12-05 12:21:30.471825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.855 [2024-12-05 12:21:30.471834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.855 [2024-12-05 12:21:30.471842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14744c0 is same with the state(6) to be set 00:17:59.855 [2024-12-05 12:21:30.481495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14744c0 (9): Bad file descriptor 00:17:59.855 [2024-12-05 12:21:30.491545] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:59.855 [2024-12-05 12:21:30.491565] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:59.855 [2024-12-05 12:21:30.491571] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:59.855 [2024-12-05 12:21:30.491576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:59.855 [2024-12-05 12:21:30.491607] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:00.789 [2024-12-05 12:21:31.517389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:00.789 [2024-12-05 12:21:31.517781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14744c0 with addr=10.0.0.3, port=4420 00:18:00.789 [2024-12-05 12:21:31.517917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14744c0 is same with the state(6) to be set 00:18:00.789 [2024-12-05 12:21:31.517960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14744c0 (9): Bad file descriptor 00:18:00.789 [2024-12-05 12:21:31.518403] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:18:00.789 [2024-12-05 12:21:31.518438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:00.789 [2024-12-05 12:21:31.518449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:00.789 [2024-12-05 12:21:31.518461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:00.789 [2024-12-05 12:21:31.518493] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:00.789 [2024-12-05 12:21:31.518500] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:18:00.789 [2024-12-05 12:21:31.518505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:00.789 [2024-12-05 12:21:31.518514] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:00.789 [2024-12-05 12:21:31.518520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:00.789 12:21:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:01.726 [2024-12-05 12:21:32.518546] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:01.726 [2024-12-05 12:21:32.518698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:01.726 [2024-12-05 12:21:32.518761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:01.726 [2024-12-05 12:21:32.518776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:01.726 [2024-12-05 12:21:32.518786] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:18:01.726 [2024-12-05 12:21:32.518796] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:01.726 [2024-12-05 12:21:32.518802] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:01.726 [2024-12-05 12:21:32.518806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
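The connect() errno 110 (ETIMEDOUT) burst above is the failure this test exists to provoke: a few trace entries earlier, steps 75 and 76 of the script removed the target's address and downed its link inside the target network namespace, so the host's reconnect attempts have nowhere to go. As traced above:

# host/discovery_remove_ifc.sh steps 75-76: make the listener at
# 10.0.0.3:4420 unreachable by stripping the target-side interface.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

How long the host keeps retrying is set on the bdev_nvme_start_discovery call at the top of the test: --reconnect-delay-sec 1 spaces the attempts one second apart, and --ctrlr-loss-timeout-sec 2 caps the retry window, after which the controller is declared lost, which is exactly what the next trace entries record.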
00:18:01.726 [2024-12-05 12:21:32.518836] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:18:01.726 [2024-12-05 12:21:32.518872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.726 [2024-12-05 12:21:32.518886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.726 [2024-12-05 12:21:32.518898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.726 [2024-12-05 12:21:32.518907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.726 [2024-12-05 12:21:32.518917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.726 [2024-12-05 12:21:32.518925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.726 [2024-12-05 12:21:32.518934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.726 [2024-12-05 12:21:32.518942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.726 [2024-12-05 12:21:32.518951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:01.726 [2024-12-05 12:21:32.518959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.726 [2024-12-05 12:21:32.518967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
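With the controller failed and its discovery entry removed, the test flips the interface back on and waits for the subsystem to be re-attached under a fresh controller. A sketch of steps 82, 83, and 86 as they appear in the trace below:

# Restore the address and link, then poll until rediscovery produces the
# new namespace. Note the expected bdev is nvme1n1, not nvme0n1: the old
# controller was torn down, so re-attach creates controller nvme1.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1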
00:18:01.726 [2024-12-05 12:21:32.519004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1474140 (9): Bad file descriptor 00:18:01.727 [2024-12-05 12:21:32.519997] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:01.727 [2024-12-05 12:21:32.520022] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:18:01.727 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:01.727 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:01.727 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.727 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:01.727 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.727 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:01.727 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:01.727 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:01.986 12:21:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:02.923 12:21:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:02.923 12:21:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:03.859 [2024-12-05 12:21:34.531372] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:03.859 [2024-12-05 12:21:34.531394] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:03.859 [2024-12-05 12:21:34.531412] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:03.859 [2024-12-05 12:21:34.617471] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:18:03.859 [2024-12-05 12:21:34.671759] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:18:03.860 [2024-12-05 12:21:34.672256] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x14e1370:1 started. 00:18:03.860 [2024-12-05 12:21:34.673602] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:03.860 [2024-12-05 12:21:34.673645] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:03.860 [2024-12-05 12:21:34.673666] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:03.860 [2024-12-05 12:21:34.673680] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:18:03.860 [2024-12-05 12:21:34.673687] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:03.860 [2024-12-05 12:21:34.679978] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x14e1370 was disconnected and freed. delete nvme_qpair. 
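Once nvme1n1 appears the test tears down: "killprocess 90773" stops the host app. A minimal sketch of the killprocess helper from common/autotest_common.sh, reconstructed from its expansion in the trace below (the real helper also special-cases processes launched via sudo, elided here):

killprocess() {
    local pid=$1 process_name=
    [[ -n $pid ]] || return 1           # the '[' -z "$pid" ']' guard in the trace
    kill -0 "$pid" 2> /dev/null || return 0   # nothing to do if it already exited
    if [[ $(uname) == Linux ]]; then
        # On Linux, record the command name (reactor_0 for an SPDK app);
        # the real helper refuses to kill a bare "sudo" wrapper.
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                          # reap it so the exit status is observed
}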
00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90773 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 90773 ']' 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 90773 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90773 00:18:04.118 killing process with pid 90773 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90773' 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 90773 00:18:04.118 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 90773 00:18:04.378 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:04.378 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:04.378 12:21:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:04.378 rmmod nvme_tcp 00:18:04.378 rmmod nvme_fabrics 00:18:04.378 rmmod nvme_keyring 00:18:04.378 12:21:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 90735 ']' 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 90735 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 90735 ']' 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 90735 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90735 00:18:04.378 killing process with pid 90735 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90735' 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 90735 00:18:04.378 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 90735 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:04.636 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:18:04.893 00:18:04.893 real 0m13.386s 00:18:04.893 user 0m23.486s 00:18:04.893 sys 0m1.673s 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.893 ************************************ 00:18:04.893 END TEST nvmf_discovery_remove_ifc 00:18:04.893 ************************************ 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.893 12:21:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.893 ************************************ 00:18:04.893 START TEST nvmf_identify_kernel_target 00:18:04.893 ************************************ 00:18:04.894 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:05.153 * Looking for test storage... 
00:18:05.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.153 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:05.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.154 --rc genhtml_branch_coverage=1 00:18:05.154 --rc genhtml_function_coverage=1 00:18:05.154 --rc genhtml_legend=1 00:18:05.154 --rc geninfo_all_blocks=1 00:18:05.154 --rc geninfo_unexecuted_blocks=1 00:18:05.154 00:18:05.154 ' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:05.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.154 --rc genhtml_branch_coverage=1 00:18:05.154 --rc genhtml_function_coverage=1 00:18:05.154 --rc genhtml_legend=1 00:18:05.154 --rc geninfo_all_blocks=1 00:18:05.154 --rc geninfo_unexecuted_blocks=1 00:18:05.154 00:18:05.154 ' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:05.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.154 --rc genhtml_branch_coverage=1 00:18:05.154 --rc genhtml_function_coverage=1 00:18:05.154 --rc genhtml_legend=1 00:18:05.154 --rc geninfo_all_blocks=1 00:18:05.154 --rc geninfo_unexecuted_blocks=1 00:18:05.154 00:18:05.154 ' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:05.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.154 --rc genhtml_branch_coverage=1 00:18:05.154 --rc genhtml_function_coverage=1 00:18:05.154 --rc genhtml_legend=1 00:18:05.154 --rc geninfo_all_blocks=1 00:18:05.154 --rc geninfo_unexecuted_blocks=1 00:18:05.154 00:18:05.154 ' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
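Before the test proper, identify_kernel_nvmf.sh checks whether the installed lcov is older than version 2 to pick coverage flags. The comparison traced above is cmp_versions from scripts/common.sh; a simplified sketch of the logic it steps through ("lt 1.15 2" asks whether 1.15 < 2), with the decimal-validation helper folded into the loop:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 op=$2 v
    local IFS=.-:                        # split versions on dots, dashes, colons
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    # Compare component by component; a missing component counts as 0,
    # so 1.15 vs 2 compares (1,15) against (2,0) and stops at 1 < 2.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
            [[ $op == *'>'* ]]; return
        fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then
            [[ $op == *'<'* ]]; return
        fi
    done
    [[ $op == *'='* ]]                   # all equal: true only for ==, <=, >=
}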
00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.154 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:05.154 12:21:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:05.154 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:05.155 12:21:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:05.155 Cannot find device "nvmf_init_br" 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:05.155 Cannot find device "nvmf_init_br2" 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:05.155 Cannot find device "nvmf_tgt_br" 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.155 Cannot find device "nvmf_tgt_br2" 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:05.155 Cannot find device "nvmf_init_br" 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:18:05.155 12:21:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:05.155 Cannot find device "nvmf_init_br2" 00:18:05.155 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:18:05.155 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:05.155 Cannot find device "nvmf_tgt_br" 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:05.414 Cannot find device "nvmf_tgt_br2" 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:05.414 Cannot find device "nvmf_br" 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:05.414 Cannot find device "nvmf_init_if" 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:05.414 Cannot find device "nvmf_init_if2" 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:05.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.414 12:21:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:05.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:05.414 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:05.673 12:21:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:05.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:05.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.145 ms 00:18:05.673 00:18:05.673 --- 10.0.0.3 ping statistics --- 00:18:05.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.673 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:05.673 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:05.674 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:05.674 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:18:05.674 00:18:05.674 --- 10.0.0.4 ping statistics --- 00:18:05.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.674 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:05.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:05.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:05.674 00:18:05.674 --- 10.0.0.1 ping statistics --- 00:18:05.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.674 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:05.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:05.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:18:05.674 00:18:05.674 --- 10.0.0.2 ping statistics --- 00:18:05.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.674 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:05.674 12:21:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:05.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:06.192 Waiting for block devices as requested 00:18:06.192 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:06.192 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:06.451 No valid GPT data, bailing 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:06.451 12:21:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:06.451 No valid GPT data, bailing 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:06.451 No valid GPT data, bailing 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:06.451 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:06.710 No valid GPT data, bailing 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -a 10.0.0.1 -t tcp -s 4420 00:18:06.710 00:18:06.710 Discovery Log Number of Records 2, Generation counter 2 00:18:06.710 =====Discovery Log Entry 0====== 00:18:06.710 trtype: tcp 00:18:06.710 adrfam: ipv4 00:18:06.710 subtype: current discovery subsystem 00:18:06.710 treq: not specified, sq flow control disable supported 00:18:06.710 portid: 1 00:18:06.710 trsvcid: 4420 00:18:06.710 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:06.710 traddr: 10.0.0.1 00:18:06.710 eflags: none 00:18:06.710 sectype: none 00:18:06.710 =====Discovery Log Entry 1====== 00:18:06.710 trtype: tcp 00:18:06.710 adrfam: ipv4 00:18:06.710 subtype: nvme subsystem 00:18:06.710 treq: not 
specified, sq flow control disable supported 00:18:06.710 portid: 1 00:18:06.710 trsvcid: 4420 00:18:06.710 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:06.710 traddr: 10.0.0.1 00:18:06.710 eflags: none 00:18:06.710 sectype: none 00:18:06.710 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:06.710 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:06.970 ===================================================== 00:18:06.970 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:06.970 ===================================================== 00:18:06.970 Controller Capabilities/Features 00:18:06.970 ================================ 00:18:06.970 Vendor ID: 0000 00:18:06.970 Subsystem Vendor ID: 0000 00:18:06.970 Serial Number: 9ed75764bc3c43c8835d 00:18:06.970 Model Number: Linux 00:18:06.970 Firmware Version: 6.8.9-20 00:18:06.970 Recommended Arb Burst: 0 00:18:06.970 IEEE OUI Identifier: 00 00 00 00:18:06.970 Multi-path I/O 00:18:06.970 May have multiple subsystem ports: No 00:18:06.970 May have multiple controllers: No 00:18:06.970 Associated with SR-IOV VF: No 00:18:06.970 Max Data Transfer Size: Unlimited 00:18:06.970 Max Number of Namespaces: 0 00:18:06.970 Max Number of I/O Queues: 1024 00:18:06.970 NVMe Specification Version (VS): 1.3 00:18:06.970 NVMe Specification Version (Identify): 1.3 00:18:06.970 Maximum Queue Entries: 1024 00:18:06.970 Contiguous Queues Required: No 00:18:06.970 Arbitration Mechanisms Supported 00:18:06.970 Weighted Round Robin: Not Supported 00:18:06.970 Vendor Specific: Not Supported 00:18:06.970 Reset Timeout: 7500 ms 00:18:06.970 Doorbell Stride: 4 bytes 00:18:06.970 NVM Subsystem Reset: Not Supported 00:18:06.970 Command Sets Supported 00:18:06.970 NVM Command Set: Supported 00:18:06.970 Boot Partition: Not Supported 00:18:06.970 Memory Page Size Minimum: 4096 bytes 00:18:06.970 Memory Page Size Maximum: 4096 bytes 00:18:06.970 Persistent Memory Region: Not Supported 00:18:06.970 Optional Asynchronous Events Supported 00:18:06.970 Namespace Attribute Notices: Not Supported 00:18:06.970 Firmware Activation Notices: Not Supported 00:18:06.970 ANA Change Notices: Not Supported 00:18:06.970 PLE Aggregate Log Change Notices: Not Supported 00:18:06.970 LBA Status Info Alert Notices: Not Supported 00:18:06.970 EGE Aggregate Log Change Notices: Not Supported 00:18:06.970 Normal NVM Subsystem Shutdown event: Not Supported 00:18:06.970 Zone Descriptor Change Notices: Not Supported 00:18:06.970 Discovery Log Change Notices: Supported 00:18:06.970 Controller Attributes 00:18:06.970 128-bit Host Identifier: Not Supported 00:18:06.970 Non-Operational Permissive Mode: Not Supported 00:18:06.970 NVM Sets: Not Supported 00:18:06.970 Read Recovery Levels: Not Supported 00:18:06.970 Endurance Groups: Not Supported 00:18:06.970 Predictable Latency Mode: Not Supported 00:18:06.970 Traffic Based Keep ALive: Not Supported 00:18:06.970 Namespace Granularity: Not Supported 00:18:06.970 SQ Associations: Not Supported 00:18:06.970 UUID List: Not Supported 00:18:06.970 Multi-Domain Subsystem: Not Supported 00:18:06.970 Fixed Capacity Management: Not Supported 00:18:06.970 Variable Capacity Management: Not Supported 00:18:06.970 Delete Endurance Group: Not Supported 00:18:06.970 Delete NVM Set: Not Supported 00:18:06.970 Extended LBA Formats Supported: Not Supported 00:18:06.970 Flexible Data 
Placement Supported: Not Supported 00:18:06.970 00:18:06.970 Controller Memory Buffer Support 00:18:06.970 ================================ 00:18:06.970 Supported: No 00:18:06.970 00:18:06.970 Persistent Memory Region Support 00:18:06.970 ================================ 00:18:06.970 Supported: No 00:18:06.970 00:18:06.970 Admin Command Set Attributes 00:18:06.970 ============================ 00:18:06.970 Security Send/Receive: Not Supported 00:18:06.970 Format NVM: Not Supported 00:18:06.970 Firmware Activate/Download: Not Supported 00:18:06.970 Namespace Management: Not Supported 00:18:06.970 Device Self-Test: Not Supported 00:18:06.970 Directives: Not Supported 00:18:06.970 NVMe-MI: Not Supported 00:18:06.970 Virtualization Management: Not Supported 00:18:06.970 Doorbell Buffer Config: Not Supported 00:18:06.970 Get LBA Status Capability: Not Supported 00:18:06.970 Command & Feature Lockdown Capability: Not Supported 00:18:06.970 Abort Command Limit: 1 00:18:06.970 Async Event Request Limit: 1 00:18:06.970 Number of Firmware Slots: N/A 00:18:06.970 Firmware Slot 1 Read-Only: N/A 00:18:06.970 Firmware Activation Without Reset: N/A 00:18:06.970 Multiple Update Detection Support: N/A 00:18:06.970 Firmware Update Granularity: No Information Provided 00:18:06.970 Per-Namespace SMART Log: No 00:18:06.970 Asymmetric Namespace Access Log Page: Not Supported 00:18:06.970 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:06.970 Command Effects Log Page: Not Supported 00:18:06.970 Get Log Page Extended Data: Supported 00:18:06.970 Telemetry Log Pages: Not Supported 00:18:06.970 Persistent Event Log Pages: Not Supported 00:18:06.970 Supported Log Pages Log Page: May Support 00:18:06.970 Commands Supported & Effects Log Page: Not Supported 00:18:06.970 Feature Identifiers & Effects Log Page:May Support 00:18:06.970 NVMe-MI Commands & Effects Log Page: May Support 00:18:06.970 Data Area 4 for Telemetry Log: Not Supported 00:18:06.970 Error Log Page Entries Supported: 1 00:18:06.970 Keep Alive: Not Supported 00:18:06.970 00:18:06.970 NVM Command Set Attributes 00:18:06.970 ========================== 00:18:06.970 Submission Queue Entry Size 00:18:06.970 Max: 1 00:18:06.970 Min: 1 00:18:06.970 Completion Queue Entry Size 00:18:06.970 Max: 1 00:18:06.970 Min: 1 00:18:06.970 Number of Namespaces: 0 00:18:06.970 Compare Command: Not Supported 00:18:06.970 Write Uncorrectable Command: Not Supported 00:18:06.970 Dataset Management Command: Not Supported 00:18:06.971 Write Zeroes Command: Not Supported 00:18:06.971 Set Features Save Field: Not Supported 00:18:06.971 Reservations: Not Supported 00:18:06.971 Timestamp: Not Supported 00:18:06.971 Copy: Not Supported 00:18:06.971 Volatile Write Cache: Not Present 00:18:06.971 Atomic Write Unit (Normal): 1 00:18:06.971 Atomic Write Unit (PFail): 1 00:18:06.971 Atomic Compare & Write Unit: 1 00:18:06.971 Fused Compare & Write: Not Supported 00:18:06.971 Scatter-Gather List 00:18:06.971 SGL Command Set: Supported 00:18:06.971 SGL Keyed: Not Supported 00:18:06.971 SGL Bit Bucket Descriptor: Not Supported 00:18:06.971 SGL Metadata Pointer: Not Supported 00:18:06.971 Oversized SGL: Not Supported 00:18:06.971 SGL Metadata Address: Not Supported 00:18:06.971 SGL Offset: Supported 00:18:06.971 Transport SGL Data Block: Not Supported 00:18:06.971 Replay Protected Memory Block: Not Supported 00:18:06.971 00:18:06.971 Firmware Slot Information 00:18:06.971 ========================= 00:18:06.971 Active slot: 0 00:18:06.971 00:18:06.971 00:18:06.971 Error Log 
00:18:06.971 ========= 00:18:06.971 00:18:06.971 Active Namespaces 00:18:06.971 ================= 00:18:06.971 Discovery Log Page 00:18:06.971 ================== 00:18:06.971 Generation Counter: 2 00:18:06.971 Number of Records: 2 00:18:06.971 Record Format: 0 00:18:06.971 00:18:06.971 Discovery Log Entry 0 00:18:06.971 ---------------------- 00:18:06.971 Transport Type: 3 (TCP) 00:18:06.971 Address Family: 1 (IPv4) 00:18:06.971 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:06.971 Entry Flags: 00:18:06.971 Duplicate Returned Information: 0 00:18:06.971 Explicit Persistent Connection Support for Discovery: 0 00:18:06.971 Transport Requirements: 00:18:06.971 Secure Channel: Not Specified 00:18:06.971 Port ID: 1 (0x0001) 00:18:06.971 Controller ID: 65535 (0xffff) 00:18:06.971 Admin Max SQ Size: 32 00:18:06.971 Transport Service Identifier: 4420 00:18:06.971 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:06.971 Transport Address: 10.0.0.1 00:18:06.971 Discovery Log Entry 1 00:18:06.971 ---------------------- 00:18:06.971 Transport Type: 3 (TCP) 00:18:06.971 Address Family: 1 (IPv4) 00:18:06.971 Subsystem Type: 2 (NVM Subsystem) 00:18:06.971 Entry Flags: 00:18:06.971 Duplicate Returned Information: 0 00:18:06.971 Explicit Persistent Connection Support for Discovery: 0 00:18:06.971 Transport Requirements: 00:18:06.971 Secure Channel: Not Specified 00:18:06.971 Port ID: 1 (0x0001) 00:18:06.971 Controller ID: 65535 (0xffff) 00:18:06.971 Admin Max SQ Size: 32 00:18:06.971 Transport Service Identifier: 4420 00:18:06.971 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:06.971 Transport Address: 10.0.0.1 00:18:06.971 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:06.971 get_feature(0x01) failed 00:18:06.971 get_feature(0x02) failed 00:18:06.971 get_feature(0x04) failed 00:18:06.971 ===================================================== 00:18:06.971 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:06.971 ===================================================== 00:18:06.971 Controller Capabilities/Features 00:18:06.971 ================================ 00:18:06.971 Vendor ID: 0000 00:18:06.971 Subsystem Vendor ID: 0000 00:18:06.971 Serial Number: 0a3643ce58776d1d3bfa 00:18:06.971 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:06.971 Firmware Version: 6.8.9-20 00:18:06.971 Recommended Arb Burst: 6 00:18:06.971 IEEE OUI Identifier: 00 00 00 00:18:06.971 Multi-path I/O 00:18:06.971 May have multiple subsystem ports: Yes 00:18:06.971 May have multiple controllers: Yes 00:18:06.971 Associated with SR-IOV VF: No 00:18:06.971 Max Data Transfer Size: Unlimited 00:18:06.971 Max Number of Namespaces: 1024 00:18:06.971 Max Number of I/O Queues: 128 00:18:06.971 NVMe Specification Version (VS): 1.3 00:18:06.971 NVMe Specification Version (Identify): 1.3 00:18:06.971 Maximum Queue Entries: 1024 00:18:06.971 Contiguous Queues Required: No 00:18:06.971 Arbitration Mechanisms Supported 00:18:06.971 Weighted Round Robin: Not Supported 00:18:06.971 Vendor Specific: Not Supported 00:18:06.971 Reset Timeout: 7500 ms 00:18:06.971 Doorbell Stride: 4 bytes 00:18:06.971 NVM Subsystem Reset: Not Supported 00:18:06.971 Command Sets Supported 00:18:06.971 NVM Command Set: Supported 00:18:06.971 Boot Partition: Not Supported 00:18:06.971 Memory 
Page Size Minimum: 4096 bytes 00:18:06.971 Memory Page Size Maximum: 4096 bytes 00:18:06.971 Persistent Memory Region: Not Supported 00:18:06.971 Optional Asynchronous Events Supported 00:18:06.971 Namespace Attribute Notices: Supported 00:18:06.971 Firmware Activation Notices: Not Supported 00:18:06.971 ANA Change Notices: Supported 00:18:06.971 PLE Aggregate Log Change Notices: Not Supported 00:18:06.971 LBA Status Info Alert Notices: Not Supported 00:18:06.971 EGE Aggregate Log Change Notices: Not Supported 00:18:06.971 Normal NVM Subsystem Shutdown event: Not Supported 00:18:06.971 Zone Descriptor Change Notices: Not Supported 00:18:06.971 Discovery Log Change Notices: Not Supported 00:18:06.971 Controller Attributes 00:18:06.971 128-bit Host Identifier: Supported 00:18:06.971 Non-Operational Permissive Mode: Not Supported 00:18:06.971 NVM Sets: Not Supported 00:18:06.971 Read Recovery Levels: Not Supported 00:18:06.971 Endurance Groups: Not Supported 00:18:06.971 Predictable Latency Mode: Not Supported 00:18:06.971 Traffic Based Keep ALive: Supported 00:18:06.971 Namespace Granularity: Not Supported 00:18:06.971 SQ Associations: Not Supported 00:18:06.971 UUID List: Not Supported 00:18:06.971 Multi-Domain Subsystem: Not Supported 00:18:06.971 Fixed Capacity Management: Not Supported 00:18:06.971 Variable Capacity Management: Not Supported 00:18:06.971 Delete Endurance Group: Not Supported 00:18:06.971 Delete NVM Set: Not Supported 00:18:06.971 Extended LBA Formats Supported: Not Supported 00:18:06.971 Flexible Data Placement Supported: Not Supported 00:18:06.971 00:18:06.971 Controller Memory Buffer Support 00:18:06.971 ================================ 00:18:06.971 Supported: No 00:18:06.971 00:18:06.971 Persistent Memory Region Support 00:18:06.971 ================================ 00:18:06.971 Supported: No 00:18:06.971 00:18:06.971 Admin Command Set Attributes 00:18:06.971 ============================ 00:18:06.971 Security Send/Receive: Not Supported 00:18:06.971 Format NVM: Not Supported 00:18:06.971 Firmware Activate/Download: Not Supported 00:18:06.971 Namespace Management: Not Supported 00:18:06.971 Device Self-Test: Not Supported 00:18:06.971 Directives: Not Supported 00:18:06.971 NVMe-MI: Not Supported 00:18:06.971 Virtualization Management: Not Supported 00:18:06.971 Doorbell Buffer Config: Not Supported 00:18:06.971 Get LBA Status Capability: Not Supported 00:18:06.971 Command & Feature Lockdown Capability: Not Supported 00:18:06.971 Abort Command Limit: 4 00:18:06.971 Async Event Request Limit: 4 00:18:06.971 Number of Firmware Slots: N/A 00:18:06.971 Firmware Slot 1 Read-Only: N/A 00:18:06.971 Firmware Activation Without Reset: N/A 00:18:06.971 Multiple Update Detection Support: N/A 00:18:06.971 Firmware Update Granularity: No Information Provided 00:18:06.971 Per-Namespace SMART Log: Yes 00:18:06.971 Asymmetric Namespace Access Log Page: Supported 00:18:06.971 ANA Transition Time : 10 sec 00:18:06.971 00:18:06.971 Asymmetric Namespace Access Capabilities 00:18:06.971 ANA Optimized State : Supported 00:18:06.971 ANA Non-Optimized State : Supported 00:18:06.971 ANA Inaccessible State : Supported 00:18:06.971 ANA Persistent Loss State : Supported 00:18:06.971 ANA Change State : Supported 00:18:06.971 ANAGRPID is not changed : No 00:18:06.971 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:06.971 00:18:06.971 ANA Group Identifier Maximum : 128 00:18:06.971 Number of ANA Group Identifiers : 128 00:18:06.971 Max Number of Allowed Namespaces : 1024 00:18:06.971 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:18:06.971 Command Effects Log Page: Supported 00:18:06.971 Get Log Page Extended Data: Supported 00:18:06.971 Telemetry Log Pages: Not Supported 00:18:06.971 Persistent Event Log Pages: Not Supported 00:18:06.971 Supported Log Pages Log Page: May Support 00:18:06.971 Commands Supported & Effects Log Page: Not Supported 00:18:06.971 Feature Identifiers & Effects Log Page:May Support 00:18:06.971 NVMe-MI Commands & Effects Log Page: May Support 00:18:06.971 Data Area 4 for Telemetry Log: Not Supported 00:18:06.971 Error Log Page Entries Supported: 128 00:18:06.971 Keep Alive: Supported 00:18:06.971 Keep Alive Granularity: 1000 ms 00:18:06.971 00:18:06.971 NVM Command Set Attributes 00:18:06.971 ========================== 00:18:06.971 Submission Queue Entry Size 00:18:06.972 Max: 64 00:18:06.972 Min: 64 00:18:06.972 Completion Queue Entry Size 00:18:06.972 Max: 16 00:18:06.972 Min: 16 00:18:06.972 Number of Namespaces: 1024 00:18:06.972 Compare Command: Not Supported 00:18:06.972 Write Uncorrectable Command: Not Supported 00:18:06.972 Dataset Management Command: Supported 00:18:06.972 Write Zeroes Command: Supported 00:18:06.972 Set Features Save Field: Not Supported 00:18:06.972 Reservations: Not Supported 00:18:06.972 Timestamp: Not Supported 00:18:06.972 Copy: Not Supported 00:18:06.972 Volatile Write Cache: Present 00:18:06.972 Atomic Write Unit (Normal): 1 00:18:06.972 Atomic Write Unit (PFail): 1 00:18:06.972 Atomic Compare & Write Unit: 1 00:18:06.972 Fused Compare & Write: Not Supported 00:18:06.972 Scatter-Gather List 00:18:06.972 SGL Command Set: Supported 00:18:06.972 SGL Keyed: Not Supported 00:18:06.972 SGL Bit Bucket Descriptor: Not Supported 00:18:06.972 SGL Metadata Pointer: Not Supported 00:18:06.972 Oversized SGL: Not Supported 00:18:06.972 SGL Metadata Address: Not Supported 00:18:06.972 SGL Offset: Supported 00:18:06.972 Transport SGL Data Block: Not Supported 00:18:06.972 Replay Protected Memory Block: Not Supported 00:18:06.972 00:18:06.972 Firmware Slot Information 00:18:06.972 ========================= 00:18:06.972 Active slot: 0 00:18:06.972 00:18:06.972 Asymmetric Namespace Access 00:18:06.972 =========================== 00:18:06.972 Change Count : 0 00:18:06.972 Number of ANA Group Descriptors : 1 00:18:06.972 ANA Group Descriptor : 0 00:18:06.972 ANA Group ID : 1 00:18:06.972 Number of NSID Values : 1 00:18:06.972 Change Count : 0 00:18:06.972 ANA State : 1 00:18:06.972 Namespace Identifier : 1 00:18:06.972 00:18:06.972 Commands Supported and Effects 00:18:06.972 ============================== 00:18:06.972 Admin Commands 00:18:06.972 -------------- 00:18:06.972 Get Log Page (02h): Supported 00:18:06.972 Identify (06h): Supported 00:18:06.972 Abort (08h): Supported 00:18:06.972 Set Features (09h): Supported 00:18:06.972 Get Features (0Ah): Supported 00:18:06.972 Asynchronous Event Request (0Ch): Supported 00:18:06.972 Keep Alive (18h): Supported 00:18:06.972 I/O Commands 00:18:06.972 ------------ 00:18:06.972 Flush (00h): Supported 00:18:06.972 Write (01h): Supported LBA-Change 00:18:06.972 Read (02h): Supported 00:18:06.972 Write Zeroes (08h): Supported LBA-Change 00:18:06.972 Dataset Management (09h): Supported 00:18:06.972 00:18:06.972 Error Log 00:18:06.972 ========= 00:18:06.972 Entry: 0 00:18:06.972 Error Count: 0x3 00:18:06.972 Submission Queue Id: 0x0 00:18:06.972 Command Id: 0x5 00:18:06.972 Phase Bit: 0 00:18:06.972 Status Code: 0x2 00:18:06.972 Status Code Type: 0x0 00:18:06.972 Do Not Retry: 1 00:18:07.232 Error 
Location: 0x28 00:18:07.232 LBA: 0x0 00:18:07.232 Namespace: 0x0 00:18:07.232 Vendor Log Page: 0x0 00:18:07.232 ----------- 00:18:07.232 Entry: 1 00:18:07.232 Error Count: 0x2 00:18:07.232 Submission Queue Id: 0x0 00:18:07.232 Command Id: 0x5 00:18:07.232 Phase Bit: 0 00:18:07.232 Status Code: 0x2 00:18:07.232 Status Code Type: 0x0 00:18:07.232 Do Not Retry: 1 00:18:07.232 Error Location: 0x28 00:18:07.232 LBA: 0x0 00:18:07.232 Namespace: 0x0 00:18:07.232 Vendor Log Page: 0x0 00:18:07.232 ----------- 00:18:07.232 Entry: 2 00:18:07.232 Error Count: 0x1 00:18:07.232 Submission Queue Id: 0x0 00:18:07.232 Command Id: 0x4 00:18:07.232 Phase Bit: 0 00:18:07.232 Status Code: 0x2 00:18:07.232 Status Code Type: 0x0 00:18:07.232 Do Not Retry: 1 00:18:07.232 Error Location: 0x28 00:18:07.232 LBA: 0x0 00:18:07.232 Namespace: 0x0 00:18:07.232 Vendor Log Page: 0x0 00:18:07.232 00:18:07.232 Number of Queues 00:18:07.232 ================ 00:18:07.232 Number of I/O Submission Queues: 128 00:18:07.232 Number of I/O Completion Queues: 128 00:18:07.232 00:18:07.232 ZNS Specific Controller Data 00:18:07.232 ============================ 00:18:07.232 Zone Append Size Limit: 0 00:18:07.232 00:18:07.232 00:18:07.232 Active Namespaces 00:18:07.232 ================= 00:18:07.232 get_feature(0x05) failed 00:18:07.232 Namespace ID:1 00:18:07.232 Command Set Identifier: NVM (00h) 00:18:07.232 Deallocate: Supported 00:18:07.232 Deallocated/Unwritten Error: Not Supported 00:18:07.232 Deallocated Read Value: Unknown 00:18:07.232 Deallocate in Write Zeroes: Not Supported 00:18:07.232 Deallocated Guard Field: 0xFFFF 00:18:07.232 Flush: Supported 00:18:07.232 Reservation: Not Supported 00:18:07.232 Namespace Sharing Capabilities: Multiple Controllers 00:18:07.232 Size (in LBAs): 1310720 (5GiB) 00:18:07.232 Capacity (in LBAs): 1310720 (5GiB) 00:18:07.232 Utilization (in LBAs): 1310720 (5GiB) 00:18:07.232 UUID: afece55c-809d-4e44-a25c-8a4afe625d15 00:18:07.232 Thin Provisioning: Not Supported 00:18:07.232 Per-NS Atomic Units: Yes 00:18:07.232 Atomic Boundary Size (Normal): 0 00:18:07.232 Atomic Boundary Size (PFail): 0 00:18:07.232 Atomic Boundary Offset: 0 00:18:07.232 NGUID/EUI64 Never Reused: No 00:18:07.232 ANA group ID: 1 00:18:07.232 Namespace Write Protected: No 00:18:07.232 Number of LBA Formats: 1 00:18:07.232 Current LBA Format: LBA Format #00 00:18:07.232 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:07.232 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.232 rmmod nvme_tcp 00:18:07.232 rmmod nvme_fabrics 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:07.232 12:21:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:07.232 12:21:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:07.232 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:07.232 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.232 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:07.232 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:07.232 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:07.232 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:07.232 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:07.491 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:07.492 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:07.492 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:07.492 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:07.492 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:07.492 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:07.492 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:07.492 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:07.492 12:21:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:08.427 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:08.427 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:08.427 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:08.427 00:18:08.427 real 0m3.497s 00:18:08.427 user 0m1.236s 00:18:08.427 sys 0m1.552s 00:18:08.427 12:21:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.427 12:21:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.427 ************************************ 00:18:08.427 END TEST nvmf_identify_kernel_target 00:18:08.427 ************************************ 00:18:08.427 12:21:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:08.427 12:21:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.427 12:21:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.427 12:21:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.427 ************************************ 00:18:08.427 START TEST nvmf_auth_host 00:18:08.427 ************************************ 00:18:08.427 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:08.686 * Looking for test storage... 
00:18:08.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.686 --rc genhtml_branch_coverage=1 00:18:08.686 --rc genhtml_function_coverage=1 00:18:08.686 --rc genhtml_legend=1 00:18:08.686 --rc geninfo_all_blocks=1 00:18:08.686 --rc geninfo_unexecuted_blocks=1 00:18:08.686 00:18:08.686 ' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.686 --rc genhtml_branch_coverage=1 00:18:08.686 --rc genhtml_function_coverage=1 00:18:08.686 --rc genhtml_legend=1 00:18:08.686 --rc geninfo_all_blocks=1 00:18:08.686 --rc geninfo_unexecuted_blocks=1 00:18:08.686 00:18:08.686 ' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.686 --rc genhtml_branch_coverage=1 00:18:08.686 --rc genhtml_function_coverage=1 00:18:08.686 --rc genhtml_legend=1 00:18:08.686 --rc geninfo_all_blocks=1 00:18:08.686 --rc geninfo_unexecuted_blocks=1 00:18:08.686 00:18:08.686 ' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:08.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.686 --rc genhtml_branch_coverage=1 00:18:08.686 --rc genhtml_function_coverage=1 00:18:08.686 --rc genhtml_legend=1 00:18:08.686 --rc geninfo_all_blocks=1 00:18:08.686 --rc geninfo_unexecuted_blocks=1 00:18:08.686 00:18:08.686 ' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:08.686 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:08.686 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:08.687 Cannot find device "nvmf_init_br" 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:08.687 Cannot find device "nvmf_init_br2" 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:08.687 Cannot find device "nvmf_tgt_br" 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:08.687 Cannot find device "nvmf_tgt_br2" 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:08.687 Cannot find device "nvmf_init_br" 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:18:08.687 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:08.945 Cannot find device "nvmf_init_br2" 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:08.945 Cannot find device "nvmf_tgt_br" 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:08.945 Cannot find device "nvmf_tgt_br2" 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:08.945 Cannot find device "nvmf_br" 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:08.945 Cannot find device "nvmf_init_if" 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:08.945 Cannot find device "nvmf_init_if2" 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:08.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:08.945 12:21:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:08.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:08.945 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
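The nvmf_veth_init steps above build the test network: initiator-side veth interfaces in the root namespace (10.0.0.1 and 10.0.0.2), target-side interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined through the nvmf_br bridge, with iptables ACCEPT rules for port 4420 added just below. A minimal standalone sketch of the same topology, reduced to a single initiator/target pair (names and addresses follow the log; the reduction to one pair is an assumption for brevity):

# Minimal sketch of the nvmf_veth_init topology above, reduced to one
# initiator/target veth pair (the test wires up two of each).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # the target-side address should now answer from the root namespace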
00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:09.202 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.202 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:09.202 00:18:09.202 --- 10.0.0.3 ping statistics --- 00:18:09.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.202 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:09.202 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:09.202 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:18:09.202 00:18:09.202 --- 10.0.0.4 ping statistics --- 00:18:09.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.202 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:09.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:18:09.202 00:18:09.202 --- 10.0.0.1 ping statistics --- 00:18:09.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.202 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:09.202 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:09.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:09.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:09.203 00:18:09.203 --- 10.0.0.2 ping statistics --- 00:18:09.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.203 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=91768 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 91768 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91768 ']' 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
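Here nvmfappstart launches the SPDK target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, pid 91768) and waitforlisten blocks until the RPC socket answers. A condensed sketch of that start-and-wait pattern, using paths from the log (the polling loop is a simplification; the real waitforlisten helper adds retry caps and better diagnostics):

# Start the target in the namespace, then poll the RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
done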
00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.203 12:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.135 12:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.135 12:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:10.135 12:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.135 12:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.135 12:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ac167a7ea3d33b7e3eaf63323d19cc1b 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PrK 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ac167a7ea3d33b7e3eaf63323d19cc1b 0 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ac167a7ea3d33b7e3eaf63323d19cc1b 0 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ac167a7ea3d33b7e3eaf63323d19cc1b 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PrK 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PrK 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PrK 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.394 12:21:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a892b2ba5f021881b1a5f0c1606177767e05e7eefe8bd3864a77c471133efa9a 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.G0M 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a892b2ba5f021881b1a5f0c1606177767e05e7eefe8bd3864a77c471133efa9a 3 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a892b2ba5f021881b1a5f0c1606177767e05e7eefe8bd3864a77c471133efa9a 3 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a892b2ba5f021881b1a5f0c1606177767e05e7eefe8bd3864a77c471133efa9a 00:18:10.394 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.G0M 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.G0M 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.G0M 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=190c4a698eb08be03a8d9e11d4bf9c33ed7d4eace1dc0cc8 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dbs 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 190c4a698eb08be03a8d9e11d4bf9c33ed7d4eace1dc0cc8 0 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 190c4a698eb08be03a8d9e11d4bf9c33ed7d4eace1dc0cc8 0 
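Every gen_dhchap_key call above follows one recipe: read N random bytes with xxd -p, keep the resulting hex string itself as the secret, and wrap it in DHHC-1 framing where the two-digit field is the digest index (null=0, sha256=1, sha384=2, sha512=3). Judging by the generated strings, the base64 payload is the ASCII secret plus four trailing bytes, presumably a little-endian CRC32; treat that framing detail as an assumption about format_dhchap_key rather than a documented contract. A sketch for a 32-character null-digest key:

# Sketch of gen_dhchap_key null 32: 16 random bytes -> 32 hex chars;
# the hex string itself is the secret, DHHC-1 framing appends what
# appears to be a little-endian CRC32 before base64 (assumption).
key=$(xxd -p -c0 -l 16 /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" <<'PYEOF' > "$file"
import base64, struct, sys, zlib
secret = sys.argv[1].encode()
blob = secret + struct.pack('<I', zlib.crc32(secret))
print('DHHC-1:00:%s:' % base64.b64encode(blob).decode())  # 00 = null digest
PYEOF
chmod 0600 "$file"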
00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=190c4a698eb08be03a8d9e11d4bf9c33ed7d4eace1dc0cc8 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dbs 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dbs 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.dbs 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a329f1aa1f180f47b5ba9c81fb6230fefa64cafbafa9821b 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.sOC 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a329f1aa1f180f47b5ba9c81fb6230fefa64cafbafa9821b 2 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a329f1aa1f180f47b5ba9c81fb6230fefa64cafbafa9821b 2 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a329f1aa1f180f47b5ba9c81fb6230fefa64cafbafa9821b 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:10.395 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.sOC 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.sOC 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sOC 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.654 12:21:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=27960348782a02000f79bd0b1917dbc4 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.B0l 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 27960348782a02000f79bd0b1917dbc4 1 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 27960348782a02000f79bd0b1917dbc4 1 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=27960348782a02000f79bd0b1917dbc4 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.B0l 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.B0l 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.B0l 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cfe740720ead813535f4bf3417ac31d6 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Wj2 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cfe740720ead813535f4bf3417ac31d6 1 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cfe740720ead813535f4bf3417ac31d6 1 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=cfe740720ead813535f4bf3417ac31d6 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Wj2 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Wj2 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Wj2 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=019a2014c962b02a4517198ab6e58c4be488c8035e269c85 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ca7 00:18:10.654 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 019a2014c962b02a4517198ab6e58c4be488c8035e269c85 2 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 019a2014c962b02a4517198ab6e58c4be488c8035e269c85 2 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=019a2014c962b02a4517198ab6e58c4be488c8035e269c85 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ca7 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ca7 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ca7 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:18:10.655 12:21:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2145ae35908414ce6902764a96c2395d 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hnr 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2145ae35908414ce6902764a96c2395d 0 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2145ae35908414ce6902764a96c2395d 0 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2145ae35908414ce6902764a96c2395d 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:18:10.655 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:18:10.912 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hnr 00:18:10.912 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hnr 00:18:10.912 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hnr 00:18:10.912 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2caa27c68493312613befab8398074653186bcdea7f0f19ccc4d366f547df3f6 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WxV 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2caa27c68493312613befab8398074653186bcdea7f0f19ccc4d366f547df3f6 3 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2caa27c68493312613befab8398074653186bcdea7f0f19ccc4d366f547df3f6 3 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2caa27c68493312613befab8398074653186bcdea7f0f19ccc4d366f547df3f6 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WxV 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WxV 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.WxV 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91768 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91768 ']' 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.913 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PrK 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.G0M ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.G0M 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dbs 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sOC ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.sOC 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.B0l 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Wj2 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Wj2 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ca7 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hnr ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hnr 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.WxV 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:11.171 12:21:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.171 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:11.172 12:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:11.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:11.738 Waiting for block devices as requested 00:18:11.738 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:11.738 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:12.305 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:12.305 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:12.305 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:12.305 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:12.305 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:12.306 No valid GPT data, bailing 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:18:12.306 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:12.306 No valid GPT data, bailing 00:18:12.564 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:12.564 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:12.564 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:18:12.564 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:18:12.564 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:12.564 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:12.565 No valid GPT data, bailing 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:12.565 No valid GPT data, bailing 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -a 10.0.0.1 -t tcp -s 4420 00:18:12.565 00:18:12.565 Discovery Log Number of Records 2, Generation counter 2 00:18:12.565 =====Discovery Log Entry 0====== 00:18:12.565 trtype: tcp 00:18:12.565 adrfam: ipv4 00:18:12.565 subtype: current discovery subsystem 00:18:12.565 treq: not specified, sq flow control disable supported 00:18:12.565 portid: 1 00:18:12.565 trsvcid: 4420 00:18:12.565 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:12.565 traddr: 10.0.0.1 00:18:12.565 eflags: none 00:18:12.565 sectype: none 00:18:12.565 =====Discovery Log Entry 1====== 00:18:12.565 trtype: tcp 00:18:12.565 adrfam: ipv4 00:18:12.565 subtype: nvme subsystem 00:18:12.565 treq: not specified, sq flow control disable supported 00:18:12.565 portid: 1 00:18:12.565 trsvcid: 4420 00:18:12.565 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:12.565 traddr: 10.0.0.1 00:18:12.565 eflags: none 00:18:12.565 sectype: none 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:12.565 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.824 nvme0n1 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.824 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.083 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.083 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:13.083 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.083 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 nvme0n1 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.084 
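
[Annotation] The configfs writes traced above (nvmf/common.sh@686-705 and host/auth.sh@36-51) only show the echo arguments; xtrace does not record redirection targets. A minimal sketch of what the target-side bring-up most likely performs, with NQNs, device, address, and key values taken verbatim from the log and the attribute paths assumed from the standard kernel nvmet configfs layout:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"  # assumed target: subsystem model string
  echo 1 > "$subsys/attr_allow_any_host"                       # assumed: opened first, restricted below
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"       # namespace 1 backed by a local NVMe disk
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                 # TCP listener for the loopback initiator
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                 # expose the subsystem on port 1

  mkdir "$host"                                                # host/auth.sh@36
  echo 0 > "$subsys/attr_allow_any_host"                       # host/auth.sh@37, assumed: enforce allowed_hosts
  ln -s "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"

  # nvmet_auth_set_key (host/auth.sh@42-51) then provisions DH-HMAC-CHAP per host;
  # the dhchap_* attribute names are assumed from the kernel nvmet host entry:
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048 > "$host/dhchap_dhgroup"
  echo 'DHHC-1:00:MTkw...' > "$host/dhchap_key"        # host secret (truncated here)
  echo 'DHHC-1:02:YTMy...' > "$host/dhchap_ctrl_key"   # controller secret, enables bidirectional auth

The nvme discover output that follows confirms the port is live: entry 0 is the well-known discovery subsystem, entry 1 the freshly exported nqn.2024-02.io.spdk:cnode0.
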
12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.084 12:21:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.084 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.344 nvme0n1 00:18:13.344 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.344 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.344 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.344 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.344 12:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:13.344 12:21:44 
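
[Annotation] On the initiator side, rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the running SPDK application, so each connect_authenticate iteration reduces to two RPCs. A sketch of the equivalent manual invocation for the key id 1 iteration traced just above; the flags are verbatim from the trace, while key1/ckey1 are names of secrets the test registered with SPDK's keyring earlier in the run (not shown in this excerpt):

  # Constrain what the host is willing to negotiate for this iteration:
  scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 \
    --dhchap-dhgroups ffdhe2048

  # Connect with in-band authentication; --dhchap-ctrlr-key additionally
  # requests that the controller authenticate itself back to the host:
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

The bare nvme0n1 lines interleaved with the RPC output are the namespace surfacing once the controller attaches; the test then checks the controller name via bdev_nvme_get_controllers piped through jq -r '.[].name' and detaches before the next matrix entry.
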
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.344 nvme0n1 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.344 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.603 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 12:21:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 nvme0n1 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:13.604 
12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.604 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
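
[Annotation] Key id 4 is the one matrix entry without a companion controller key: the trace above shows ckey= expanding to the empty string, the [[ -z '' ]] branch short-circuiting, and the attach carrying only --dhchap-key key4, so that session authenticates unidirectionally. The script's idiom for this, visible at host/auth.sh@58, is bash's alternate-value expansion:

  # Expands to the extra flag pair only when a controller key exists for this
  # key id; for keyid=4 the array stays empty and no --dhchap-ctrlr-key is passed.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
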
00:18:13.862 nvme0n1 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:13.862 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:14.120 12:21:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.120 12:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.377 nvme0n1 00:18:14.377 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.377 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.378 12:21:45 
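
[Annotation] The get_main_ns_ip block that precedes every attach resolves which address the initiator should dial for the active transport. The trace only shows the already-expanded [[ -z ... ]] checks; a paraphrase of the logic under that reading (the transport variable name is not visible in this excerpt and is assumed here):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the target-side IP
          [tcp]=NVMF_INITIATOR_IP       # TCP runs dial the initiator-side IP
      )
      ip=${ip_candidates[$TEST_TRANSPORT]}  # assumed variable name; 'tcp' in this run
      echo "${!ip}"                         # indirect expansion; resolves to 10.0.0.1 here
  }
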
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.378 12:21:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.378 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.635 nvme0n1 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.635 nvme0n1 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.635 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.894 nvme0n1 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:14.894 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.895 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.154 nvme0n1 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:15.154 12:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.722 12:21:46 
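
[Annotation] One detail worth noticing across the matrix: the five secrets differ in the second field of the DHHC-1 string (key0 and key1 use :00:, key2 :01:, key3 :02:, key4 :03:), so every transformation variant of the DH-HMAC-CHAP secret representation gets exercised. In the NVMe secret format that field encodes how the base64 payload was derived: 00 means the raw secret, 01/02/03 a SHA-256/SHA-384/SHA-512 transform bound to the host NQN. Such secrets are typically produced with nvme-cli's generator; the command below is illustrative only and not part of the trace:

  # -m selects the transform (0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512),
  # -l the secret length in bytes, -n the host NQN the transform is bound to.
  nvme gen-dhchap-key -m 3 -l 48 -n nqn.2024-02.io.spdk:host0
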
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.722 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.981 nvme0n1 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:15.981 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.982 12:21:46 
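
Whether an attach carries --dhchap-ctrlr-key (bidirectional authentication, as in the key1/ckey1 attach just traced) is decided entirely by the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line: bash's ${var:+word} expands to nothing when the array slot is empty, so the whole option pair vanishes for keys without a controller secret (keyid 4 in this run). A small self-contained illustration of that expansion, with shortened placeholder values rather than the test's real keys:

    # ${ckeys[keyid]:+...} drops the whole option pair when ckey is empty.
    ckeys=([1]="DHHC-1:02:...example..." [4]="")      # illustrative only
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid: ${ckey[*]:-(no controller key, one-way auth)}"
    done
    # keyid=1: --dhchap-ctrlr-key ckey1
    # keyid=4: (no controller key, one-way auth)
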
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.982 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.241 nvme0n1 00:18:16.241 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.241 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.241 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.241 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.241 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.241 12:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.241 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.500 nvme0n1 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.500 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:16.501 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:16.501 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:16.501 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:16.501 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.501 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.759 nvme0n1 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:16.759 12:21:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.759 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.018 nvme0n1 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:17.018 12:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.959 nvme0n1 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:18.959 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.960 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.960 12:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.218 nvme0n1 00:18:19.218 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.218 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.218 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.218 12:21:50 
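
The nvmf/common.sh@769-783 lines that keep reappearing are get_main_ns_ip resolving which address the host should dial for the active transport. Only substituted values survive in xtrace output, so the following is a reconstruction of the helper's shape rather than its literal source; the transport variable name is an assumption (the trace shows only its value, tcp):

    # Reconstruction of get_main_ns_ip as traced (nvmf/common.sh@769-783):
    # map transport -> name of the env var that holds the IP, then
    # dereference that name with ${!ip} (indirect expansion).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # dereferences to 10.0.0.1
        echo "${!ip}"
    }
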
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.218 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.218 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.477 12:21:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.477 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.736 nvme0n1 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.736 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:19.994 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.994 
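
Each attach is followed by the same success check, visible again right after this point in the trace: list the controllers, extract the names with jq -r '.[].name', and compare against the expected bdev name. The comparison is written [[ nvme0 == \n\v\m\e\0 ]] because the right-hand side of == inside [[ ]] is a glob pattern; escaping every character forces a literal match. In isolation:

    # Right side of == in [[ ]] is a pattern unless escaped or quoted.
    name=nvme0
    [[ $name == nvme* ]]        # true: unescaped, treated as a glob
    [[ $name == \n\v\m\e\0 ]]   # true only for the literal string nvme0
    [[ "nvme0extra" == \n\v\m\e\0 ]] || echo "literal match rejects it"
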
12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.253 nvme0n1 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.253 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.254 12:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.513 nvme0n1 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.513 12:21:51 
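
The bare echo 'hmac(sha256)' / echo ffdhe8192 / echo DHHC-1:... lines that follow are the target half of the handshake: xtrace does not print redirections, so the write destinations are invisible here. On a Linux nvmet target these writes land in the per-host DH-CHAP attributes under configfs; the exact paths below are an assumption based on the kernel's nvmet layout, not something shown in this log:

    # Hedged sketch of what nvmet_auth_set_key's echoes plausibly do:
    # write digest, DH group and key into the nvmet host entry.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe8192      > "$host/dhchap_dhgroup"
    echo "DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl:" > "$host/dhchap_key"
    # a second write, to dhchap_ctrl_key, follows when a controller key exists
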
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.513 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.514 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.514 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.514 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.514 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.514 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.514 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.514 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.081 nvme0n1 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.081 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.340 12:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.915 nvme0n1 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.915 
12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.915 12:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.525 nvme0n1 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:22.525 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.526 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.105 nvme0n1 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.105 12:21:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.105 12:21:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.105 12:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.673 nvme0n1 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.673 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:23.674 nvme0n1 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.674 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.933 nvme0n1 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:23.933 
12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.933 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.934 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 nvme0n1 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.194 
12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 nvme0n1 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 12:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.194 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.453 nvme0n1 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.453 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.711 nvme0n1 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.711 
12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.711 12:21:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.711 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.712 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.712 nvme0n1 00:18:24.712 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.712 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.712 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.712 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.712 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.712 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:24.971 12:21:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 nvme0n1 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.971 12:21:55 
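The echo 'hmac(sha384)' / echo ffdhe3072 / echo DHHC-1:... entries at host/auth.sh@48-51 are the target-side half of nvmet_auth_set_key. On a Linux-kernel nvmet target those writes would land in the host's configfs directory; a minimal sketch, assuming the standard kernel configfs layout (the paths never appear in this excerpt), using keyid 3's values from the trace:

    # sketch of the target-side writes behind nvmet_auth_set_key (paths assumed)
    h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$h/dhchap_hash"      # digest under test
    echo 'ffdhe3072'    > "$h/dhchap_dhgroup"   # DH group under test
    echo 'DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==:' > "$h/dhchap_key"
    echo 'DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6:' > "$h/dhchap_ctrl_key"   # enables bidirectional auth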
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.971 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.231 nvme0n1 00:18:25.231 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.231 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.231 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.231 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.231 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.231 12:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:25.231 
12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.231 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
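The next line opens the ffdhe4096 round: the @101/@102 entries are the test's driving loops, and @103/@104 are the two halves of each iteration. Reconstructed shape of the loop, as a sketch from the trace (the full dhgroups array is not visible in this excerpt, which only shows ffdhe3072, ffdhe4096 and ffdhe6144):

    for dhgroup in "${dhgroups[@]}"; do           # host/auth.sh@101
        for keyid in "${!keys[@]}"; do            # host/auth.sh@102
            nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # @103: program the target
            connect_authenticate sha384 "$dhgroup" "$keyid"   # @104: connect and verify
        done
    done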
00:18:25.491 nvme0n1 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:25.491 12:21:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.491 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.751 nvme0n1 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.751 12:21:56 
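The nvmf/common.sh@769-783 run repeated before every attach above is get_main_ns_ip: it maps the transport under test to the environment variable holding the connect address, then dereferences it. A reconstruction from the trace (the TEST_TRANSPORT variable name is an assumption; the trace only shows its value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1          # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # trace: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                   # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                 # trace: echo 10.0.0.1
    }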
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.751 12:21:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.751 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.010 nvme0n1 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:26.010 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.011 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.269 nvme0n1 00:18:26.269 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.269 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.269 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.269 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.269 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.269 12:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:26.269 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.270 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.528 nvme0n1 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.528 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.786 nvme0n1 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.786 12:21:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.786 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.787 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:26.787 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.787 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:26.787 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:26.787 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:26.787 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.787 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.787 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.045 nvme0n1 00:18:27.045 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.045 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.045 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.045 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.045 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.045 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.303 12:21:57 
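On the host side, each connect_authenticate round is the @60/@61 pair just completed above: first restrict the initiator to the digest/dhgroup under test, then attach with the matching keyring entries. The standalone equivalent via SPDK's scripts/rpc.py (rpc_cmd in this log is the autotest wrapper around it; key1/ckey1 are keyring names registered earlier in the run, outside this excerpt):

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1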
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.303 12:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.562 nvme0n1 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.562 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.820 nvme0n1 00:18:27.820 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.820 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.820 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.820 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.820 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.820 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.079 12:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.338 nvme0n1 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:28.338 12:21:59 
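
Each pass of the trace first programs the kernel nvmet target with the key material under test before the host attempts to connect. A minimal bash reconstruction of nvmet_auth_set_key from the echoed values above; the configfs attribute paths and the host NQN directory are assumptions, since bash xtrace does not print redirection targets.

# Sketch of nvmet_auth_set_key as traced at host/auth.sh@42-@51. keys[] and
# ckeys[] hold the DHHC-1 secrets; the configfs destinations are assumed.
nvmet_auth_set_key() {
	local digest dhgroup keyid key ckey
	digest=$1 dhgroup=$2 keyid=$3
	key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

	echo "hmac($digest)" > "$host/dhchap_hash"   # e.g. 'hmac(sha384)'
	echo "$dhgroup" > "$host/dhchap_dhgroup"     # e.g. ffdhe6144
	echo "$key" > "$host/dhchap_key"
	# A controller (bidirectional) key is optional:
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
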
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.338 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.339 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.339 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.339 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.339 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.339 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:28.339 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.339 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 nvme0n1 00:18:28.597 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.597 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:28.597 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:28.597 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.597 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.857 12:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.425 nvme0n1 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:29.425 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.426 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.994 nvme0n1 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.994 12:22:00 
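
The host side of each pass is visible at host/auth.sh@55-@61: restrict the initiator to exactly the digest and DH group under test, then attach with the named key(s). A sketch reconstructed from those trace lines; rpc_cmd, get_main_ns_ip and the ckeys[] array come from the test harness.

connect_authenticate() {
	local digest dhgroup keyid ckey
	digest=$1 dhgroup=$2 keyid=$3
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Only the digest/dhgroup under test are allowed for negotiation.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"
}
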
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:29.994 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.995 12:22:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.995 12:22:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.562 nvme0n1 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:30.562 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:30.563 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:30.563 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:30.563 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.563 
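
Worth noting in the @58/@46 lines: the controller key is optional. The ${var:+word} expansion yields the extra flag only when a ckey is configured for that key ID, which is why the keyid=4 attaches in this trace carry --dhchap-key key4 but no --dhchap-ctrlr-key.

ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
# ckeys[2] non-empty -> ckey=(--dhchap-ctrlr-key ckey2), mutual authentication
# ckeys[4] empty     -> ckey=(), the host authenticates one-way only
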
12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.129 nvme0n1 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.129 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.130 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.130 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.130 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.130 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.130 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:31.130 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.130 12:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.696 nvme0n1 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:31.696 12:22:02 
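
The repeated nvmf/common.sh@769-@783 block resolves the target address: the associative array maps a transport to the *name* of the environment variable holding the address, which is then dereferenced. A sketch under the assumption that the transport is read from a variable such as TEST_TRANSPORT; the trace only shows its expanded value, tcp.

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1   # indirect expansion: ${!ip} -> $NVMF_INITIATOR_IP
	echo "${!ip}"                 # 10.0.0.1 in this run
}
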
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:31.696 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.697 12:22:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.697 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.956 nvme0n1 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:31.956 12:22:02 
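
The driver visible at host/auth.sh@100-@104 crosses every digest with every DH group and every key ID, which is why the same set_key/connect/detach sequence repeats throughout this section. A sketch of the loop; the trace only shows sha384/sha512, ffdhe2048/6144/8192 and keys 0-4, so the full arrays in auth.sh may be longer.

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
			connect_authenticate "$digest" "$dhgroup" "$keyid"
		done
	done
done
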
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.956 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.957 nvme0n1 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.957 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.216 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.216 nvme0n1 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.217 12:22:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.217 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 nvme0n1 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.476 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.477 nvme0n1 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.477 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.737 nvme0n1 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.997 nvme0n1 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.997 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:32.997 
12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.998 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.257 nvme0n1 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.257 12:22:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.257 
12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.257 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.517 nvme0n1 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.517 nvme0n1 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.517 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.777 nvme0n1 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.777 
12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.777 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.036 12:22:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.036 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.037 nvme0n1 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.037 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:34.296 12:22:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.296 12:22:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.296 nvme0n1 00:18:34.296 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.296 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.296 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.296 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.296 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.296 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.554 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.554 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.554 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.555 12:22:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.555 nvme0n1 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.555 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.813 
12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
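The nvmet_auth_set_key traces above (host/auth.sh@42-51) are the target-side half of each pass: the three echoes select the HMAC, the FFDHE group, and the DHHC-1 host secret for the kernel nvmet target, with the controller secret written only when the pass defines a ckey. A minimal standalone sketch of that pattern follows; the configfs mount point and attribute names are the kernel nvmet in-band-auth interface as assumed here, since the trace shows only the echoed values, not their destinations:

    # Sketch of the nvmet_auth_set_key pattern seen in the trace (paths assumed,
    # not shown in the log). Requires a kernel with nvmet auth support and the
    # host NQN already created under nvmet configfs.
    hostnqn=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)' > "$cfg/dhchap_hash"      # digest under test
    echo ffdhe4096 > "$cfg/dhchap_dhgroup"        # DH group under test
    echo "$key" > "$cfg/dhchap_key"               # host secret, DHHC-1:xx:...: form
    if [[ -n "$ckey" ]]; then
        echo "$ckey" > "$cfg/dhchap_ctrl_key"     # controller secret, when bidirectional
    fi
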
00:18:34.813 nvme0n1 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.813 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:35.071 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:35.072 12:22:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.072 12:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.330 nvme0n1 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.330 12:22:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.330 12:22:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.330 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.896 nvme0n1 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:35.896 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.897 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.155 nvme0n1 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.155 12:22:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.414 nvme0n1 00:18:36.414 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.414 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.414 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.414 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.414 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.414 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.673 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.674 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.932 nvme0n1 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWMxNjdhN2VhM2QzM2I3ZTNlYWY2MzMyM2QxOWNjMWJZGbEl: 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: ]] 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTg5MmIyYmE1ZjAyMTg4MWIxYTVmMGMxNjA2MTc3NzY3ZTA1ZTdlZWZlOGJkMzg2NGE3N2M0NzExMzNlZmE5YR278M0=: 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.932 12:22:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.932 12:22:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.498 nvme0n1 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:37.498 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.499 12:22:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.499 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.066 nvme0n1 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.066 12:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.633 nvme0n1 00:18:38.633 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.633 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDE5YTIwMTRjOTYyYjAyYTQ1MTcxOThhYjZlNThjNGJlNDg4YzgwMzVlMjY5Yzg1pWR+9g==: 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: ]] 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjE0NWFlMzU5MDg0MTRjZTY5MDI3NjRhOTZjMjM5NWQ+L1j6: 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.634 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.200 nvme0n1 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmNhYTI3YzY4NDkzMzEyNjEzYmVmYWI4Mzk4MDc0NjUzMTg2YmNkZWE3ZjBmMTljY2M0ZDM2NmY1NDdkZjNmNgWon6E=: 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:39.200 12:22:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.200 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.201 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.201 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.201 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.201 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:39.201 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.201 12:22:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.766 nvme0n1 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.766 2024/12/05 12:22:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:39.766 request: 00:18:39.766 { 00:18:39.766 "method": "bdev_nvme_attach_controller", 00:18:39.766 "params": { 00:18:39.766 "name": "nvme0", 00:18:39.766 "trtype": "tcp", 00:18:39.766 "traddr": "10.0.0.1", 00:18:39.766 "adrfam": "ipv4", 00:18:39.766 "trsvcid": "4420", 00:18:39.766 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:39.766 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:39.766 "prchk_reftag": false, 00:18:39.766 "prchk_guard": false, 00:18:39.766 "hdgst": false, 00:18:39.766 "ddgst": false, 00:18:39.766 "allow_unrecognized_csi": false 00:18:39.766 } 00:18:39.766 } 00:18:39.766 Got JSON-RPC error response 00:18:39.766 GoRPCClient: error on JSON-RPC call 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.766 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.025 2024/12/05 12:22:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:40.025 request: 00:18:40.025 { 00:18:40.025 "method": "bdev_nvme_attach_controller", 00:18:40.025 "params": { 00:18:40.025 "name": "nvme0", 00:18:40.025 "trtype": "tcp", 00:18:40.025 "traddr": "10.0.0.1", 00:18:40.025 "adrfam": "ipv4", 00:18:40.025 "trsvcid": "4420", 00:18:40.025 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:40.025 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:40.025 "prchk_reftag": false, 00:18:40.025 "prchk_guard": false, 
00:18:40.025 "hdgst": false, 00:18:40.025 "ddgst": false, 00:18:40.025 "dhchap_key": "key2", 00:18:40.025 "allow_unrecognized_csi": false 00:18:40.025 } 00:18:40.025 } 00:18:40.025 Got JSON-RPC error response 00:18:40.025 GoRPCClient: error on JSON-RPC call 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.025 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.025 2024/12/05 12:22:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:40.025 request: 00:18:40.025 { 00:18:40.025 "method": "bdev_nvme_attach_controller", 00:18:40.025 "params": { 00:18:40.025 "name": "nvme0", 00:18:40.025 "trtype": "tcp", 00:18:40.025 "traddr": "10.0.0.1", 00:18:40.025 "adrfam": "ipv4", 00:18:40.025 "trsvcid": "4420", 00:18:40.025 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:40.025 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:40.025 "prchk_reftag": false, 00:18:40.025 "prchk_guard": false, 00:18:40.025 "hdgst": false, 00:18:40.025 "ddgst": false, 00:18:40.025 "dhchap_key": "key1", 00:18:40.026 "dhchap_ctrlr_key": "ckey2", 00:18:40.026 "allow_unrecognized_csi": false 00:18:40.026 } 00:18:40.026 } 00:18:40.026 Got JSON-RPC error response 00:18:40.026 GoRPCClient: error on JSON-RPC call 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.026 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.284 nvme0n1 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.284 12:22:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.284 2024/12/05 12:22:11 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:18:40.284 request: 00:18:40.284 { 00:18:40.284 "method": "bdev_nvme_set_keys", 00:18:40.284 "params": { 00:18:40.284 "name": "nvme0", 00:18:40.284 "dhchap_key": "key1", 00:18:40.284 "dhchap_ctrlr_key": "ckey2" 00:18:40.284 } 00:18:40.284 } 00:18:40.284 Got JSON-RPC error response 00:18:40.284 GoRPCClient: error on JSON-RPC call 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:40.284 12:22:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:41.220 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.220 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:41.221 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.221 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.221 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:41.480 12:22:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTkwYzRhNjk4ZWIwOGJlMDNhOGQ5ZTExZDRiZjljMzNlZDdkNGVhY2UxZGMwY2M4CN+vLQ==: 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTMyOWYxYWExZjE4MGY0N2I1YmE5YzgxZmI2MjMwZmVmYTY0Y2FmYmFmYTk4MjFiFQRkTw==: 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.480 nvme0n1 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
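[Note] The nvmet_auth_set_key trace here provisions the kernel target side of DH-HMAC-CHAP before the host reconnects. Judging from the echo lines visible in the trace, the helper writes the negotiated digest, the DH group, and the DHHC-1 secrets into the nvmet configfs entry for the host NQN. A minimal by-hand sketch of what host/auth.sh wraps — the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumed from the stock Linux nvmet layout, and <key>/<ckey> are placeholders, not the real secrets from this run:

    # hypothetical manual equivalent of: nvmet_auth_set_key sha256 ffdhe2048 1
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"          # digest, matches the echo 'hmac(sha256)' in the trace
    echo ffdhe2048 > "$host/dhchap_dhgroup"            # DH group for the secret exchange
    echo 'DHHC-1:00:<key>:' > "$host/dhchap_key"       # host secret (<key> is a placeholder)
    echo 'DHHC-1:02:<ckey>:' > "$host/dhchap_ctrl_key" # controller secret; setting it makes auth bidirectional

When only dhchap_key is written, authentication stays unidirectional; the ckey branch visible in the trace is what upgrades a given keyid to bidirectional auth.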
00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Mjc5NjAzNDg3ODJhMDIwMDBmNzliZDBiMTkxN2RiYzQbFjxM: 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2ZlNzQwNzIwZWFkODEzNTM1ZjRiZjM0MTdhYzMxZDbCGdar: 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.480 2024/12/05 12:22:12 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:18:41.480 request: 00:18:41.480 { 00:18:41.480 "method": "bdev_nvme_set_keys", 00:18:41.480 "params": { 00:18:41.480 "name": "nvme0", 00:18:41.480 "dhchap_key": "key2", 00:18:41.480 "dhchap_ctrlr_key": "ckey1" 00:18:41.480 } 00:18:41.480 } 00:18:41.480 Got JSON-RPC error response 00:18:41.480 GoRPCClient: error on JSON-RPC call 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.480 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.481 12:22:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.481 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.481 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:41.481 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.481 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.481 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:41.481 12:22:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.857 rmmod nvme_tcp 00:18:42.857 rmmod nvme_fabrics 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 91768 ']' 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 91768 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 91768 ']' 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 91768 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91768 00:18:42.857 killing process with pid 91768 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 
= sudo ']' 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91768' 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 91768 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 91768 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:42.857 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:43.117 12:22:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:44.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:44.056 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:44.056 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:44.056 12:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PrK /tmp/spdk.key-null.dbs /tmp/spdk.key-sha256.B0l /tmp/spdk.key-sha384.ca7 /tmp/spdk.key-sha512.WxV /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:44.056 12:22:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:44.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:44.315 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:44.315 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:44.573 00:18:44.573 real 0m35.956s 00:18:44.573 user 0m33.124s 00:18:44.573 sys 0m4.012s 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.573 ************************************ 00:18:44.573 END TEST nvmf_auth_host 00:18:44.573 ************************************ 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.573 ************************************ 00:18:44.573 START TEST nvmf_digest 00:18:44.573 
************************************ 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:44.573 * Looking for test storage... 00:18:44.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.573 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.832 --rc genhtml_branch_coverage=1 00:18:44.832 --rc genhtml_function_coverage=1 00:18:44.832 --rc genhtml_legend=1 00:18:44.832 --rc geninfo_all_blocks=1 00:18:44.832 --rc geninfo_unexecuted_blocks=1 00:18:44.832 00:18:44.832 ' 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.832 --rc genhtml_branch_coverage=1 00:18:44.832 --rc genhtml_function_coverage=1 00:18:44.832 --rc genhtml_legend=1 00:18:44.832 --rc geninfo_all_blocks=1 00:18:44.832 --rc geninfo_unexecuted_blocks=1 00:18:44.832 00:18:44.832 ' 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.832 --rc genhtml_branch_coverage=1 00:18:44.832 --rc genhtml_function_coverage=1 00:18:44.832 --rc genhtml_legend=1 00:18:44.832 --rc geninfo_all_blocks=1 00:18:44.832 --rc geninfo_unexecuted_blocks=1 00:18:44.832 00:18:44.832 ' 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.832 --rc genhtml_branch_coverage=1 00:18:44.832 --rc genhtml_function_coverage=1 00:18:44.832 --rc genhtml_legend=1 00:18:44.832 --rc geninfo_all_blocks=1 00:18:44.832 --rc geninfo_unexecuted_blocks=1 00:18:44.832 00:18:44.832 ' 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.832 12:22:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.832 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.833 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:44.833 Cannot find device "nvmf_init_br" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:44.833 Cannot find device "nvmf_init_br2" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:44.833 Cannot find device "nvmf_tgt_br" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:44.833 Cannot find device "nvmf_tgt_br2" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:44.833 Cannot find device "nvmf_init_br" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:44.833 Cannot find device "nvmf_init_br2" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:44.833 Cannot find device "nvmf_tgt_br" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:44.833 Cannot find device "nvmf_tgt_br2" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:44.833 Cannot find device "nvmf_br" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:44.833 Cannot find device "nvmf_init_if" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:44.833 Cannot find device "nvmf_init_if2" 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.833 12:22:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:44.833 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:45.092 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:18:45.092 00:18:45.092 --- 10.0.0.3 ping statistics --- 00:18:45.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.092 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.092 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.092 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:18:45.092 00:18:45.092 --- 10.0.0.4 ping statistics --- 00:18:45.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.092 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:45.092 00:18:45.092 --- 10.0.0.1 ping statistics --- 00:18:45.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.092 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:45.092 00:18:45.092 --- 10.0.0.2 ping statistics --- 00:18:45.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.092 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:45.092 ************************************ 00:18:45.092 START TEST nvmf_digest_clean 00:18:45.092 ************************************ 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
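[editor's note] The topology nvmf_veth_init just assembled can be reproduced standalone with plain iproute2 (root required). A minimal sketch using one initiator/target veth pair instead of the two pairs above; the demo_* names are placeholders, not the harness's:

  ip netns add demo_tgt_ns                                   # target-side namespace
  ip link add demo_init_if type veth peer name demo_init_br  # initiator pair
  ip link add demo_tgt_if type veth peer name demo_tgt_br    # target pair
  ip link set demo_tgt_if netns demo_tgt_ns                  # target end lives in the netns
  ip addr add 10.0.0.1/24 dev demo_init_if
  ip netns exec demo_tgt_ns ip addr add 10.0.0.3/24 dev demo_tgt_if
  ip link set demo_init_if up && ip link set demo_init_br up && ip link set demo_tgt_br up
  ip netns exec demo_tgt_ns ip link set demo_tgt_if up
  ip netns exec demo_tgt_ns ip link set lo up
  ip link add demo_br type bridge && ip link set demo_br up  # bridge stitches the halves
  ip link set demo_init_br master demo_br
  ip link set demo_tgt_br master demo_br
  iptables -I INPUT 1 -i demo_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.3                                         # same reachability check as above

The bridge is what lets the root namespace reach 10.0.0.3/10.0.0.4 inside the namespace without any routing: all the veth peers sit on one L2 segment.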
00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=93432 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 93432 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93432 ']' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.092 12:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:45.092 [2024-12-05 12:22:15.941533] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:18:45.092 [2024-12-05 12:22:15.941626] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.351 [2024-12-05 12:22:16.080320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.351 [2024-12-05 12:22:16.122950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.351 [2024-12-05 12:22:16.123016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.351 [2024-12-05 12:22:16.123026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.351 [2024-12-05 12:22:16.123034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.351 [2024-12-05 12:22:16.123050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
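[editor's note] nvmfappstart above backgrounds nvmf_tgt inside the namespace, and waitforlisten blocks until the RPC socket answers. A condensed sketch of that pattern (the loop body illustrates the idea rather than the harness's exact code; SPDK_BIN_DIR and rootdir are assumed environment variables):

  ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  for ((i = 100; i != 0; i--)); do
      # rpc_get_methods fails until the app is up and listening on the socket
      "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done

--wait-for-rpc holds the target in its pre-init state, so the digest tests can configure the accel layer over RPC before framework initialization completes.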
00:18:45.351 [2024-12-05 12:22:16.123426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.351 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.351 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:45.351 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.351 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.351 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:45.610 null0 00:18:45.610 [2024-12-05 12:22:16.356385] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.610 [2024-12-05 12:22:16.380478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93469 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93469 /var/tmp/bperf.sock 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93469 ']' 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:45.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.610 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:45.610 [2024-12-05 12:22:16.447011] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:18:45.610 [2024-12-05 12:22:16.447103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93469 ] 00:18:45.870 [2024-12-05 12:22:16.598087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.870 [2024-12-05 12:22:16.657990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.870 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.870 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:45.870 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:45.870 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:45.870 12:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:46.438 12:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.438 12:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.696 nvme0n1 00:18:46.696 12:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:46.696 12:22:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:46.955 Running I/O for 2 seconds... 
00:18:48.896 22956.00 IOPS, 89.67 MiB/s [2024-12-05T12:22:19.765Z] 22962.00 IOPS, 89.70 MiB/s 00:18:48.896 Latency(us) 00:18:48.896 [2024-12-05T12:22:19.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.896 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:48.896 nvme0n1 : 2.01 22988.23 89.80 0.00 0.00 5562.40 2800.17 11260.28 00:18:48.896 [2024-12-05T12:22:19.765Z] =================================================================================================================== 00:18:48.896 [2024-12-05T12:22:19.765Z] Total : 22988.23 89.80 0.00 0.00 5562.40 2800.17 11260.28 00:18:48.896 { 00:18:48.896 "results": [ 00:18:48.896 { 00:18:48.896 "job": "nvme0n1", 00:18:48.896 "core_mask": "0x2", 00:18:48.896 "workload": "randread", 00:18:48.896 "status": "finished", 00:18:48.896 "queue_depth": 128, 00:18:48.896 "io_size": 4096, 00:18:48.896 "runtime": 2.00607, 00:18:48.896 "iops": 22988.23071976551, 00:18:48.896 "mibps": 89.79777624908402, 00:18:48.896 "io_failed": 0, 00:18:48.896 "io_timeout": 0, 00:18:48.896 "avg_latency_us": 5562.399748933519, 00:18:48.896 "min_latency_us": 2800.1745454545453, 00:18:48.896 "max_latency_us": 11260.276363636363 00:18:48.896 } 00:18:48.896 ], 00:18:48.896 "core_count": 1 00:18:48.896 } 00:18:48.896 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:48.896 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:48.896 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:48.896 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:48.896 | select(.opcode=="crc32c") 00:18:48.896 | "\(.module_name) \(.executed)"' 00:18:48.896 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93469 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93469 ']' 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93469 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93469 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
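[editor's note] The pass/fail check above hinges on that jq filter over bperf's accel_get_stats reply. On a sample payload shaped like the reply (trimmed to the fields the filter reads; values illustrative):

  echo '{"operations": [{"opcode": "crc32c", "module_name": "software", "executed": 22988}]}' |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints: software 22988

read -r acc_module acc_executed then splits that line, and the test asserts acc_executed > 0 and acc_module == software, i.e. every crc32c in the run went through the expected module (software here, since scan_dsa=false).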
00:18:49.155 killing process with pid 93469 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93469' 00:18:49.155 Received shutdown signal, test time was about 2.000000 seconds 00:18:49.155 00:18:49.155 Latency(us) 00:18:49.155 [2024-12-05T12:22:20.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.155 [2024-12-05T12:22:20.024Z] =================================================================================================================== 00:18:49.155 [2024-12-05T12:22:20.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93469 00:18:49.155 12:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93469 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93544 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93544 /var/tmp/bperf.sock 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93544 ']' 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:49.414 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.415 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:49.415 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:49.415 Zero copy mechanism will not be used. 00:18:49.415 [2024-12-05 12:22:20.213046] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:18:49.415 [2024-12-05 12:22:20.213138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93544 ] 00:18:49.673 [2024-12-05 12:22:20.349395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.673 [2024-12-05 12:22:20.393228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.673 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.673 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:49.674 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:49.674 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:49.674 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:50.241 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:50.241 12:22:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:50.499 nvme0n1 00:18:50.499 12:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:50.499 12:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:50.499 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:50.499 Zero copy mechanism will not be used. 00:18:50.499 Running I/O for 2 seconds... 
00:18:52.813 8988.00 IOPS, 1123.50 MiB/s [2024-12-05T12:22:23.682Z] 8979.00 IOPS, 1122.38 MiB/s 00:18:52.813 Latency(us) 00:18:52.813 [2024-12-05T12:22:23.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.813 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:52.813 nvme0n1 : 2.00 8975.93 1121.99 0.00 0.00 1779.58 543.65 3530.01 00:18:52.813 [2024-12-05T12:22:23.682Z] =================================================================================================================== 00:18:52.813 [2024-12-05T12:22:23.682Z] Total : 8975.93 1121.99 0.00 0.00 1779.58 543.65 3530.01 00:18:52.813 { 00:18:52.813 "results": [ 00:18:52.813 { 00:18:52.813 "job": "nvme0n1", 00:18:52.813 "core_mask": "0x2", 00:18:52.813 "workload": "randread", 00:18:52.813 "status": "finished", 00:18:52.813 "queue_depth": 16, 00:18:52.813 "io_size": 131072, 00:18:52.813 "runtime": 2.003023, 00:18:52.813 "iops": 8975.932877455725, 00:18:52.813 "mibps": 1121.9916096819657, 00:18:52.813 "io_failed": 0, 00:18:52.813 "io_timeout": 0, 00:18:52.813 "avg_latency_us": 1779.5778857151524, 00:18:52.813 "min_latency_us": 543.6509090909091, 00:18:52.813 "max_latency_us": 3530.0072727272727 00:18:52.813 } 00:18:52.813 ], 00:18:52.813 "core_count": 1 00:18:52.813 } 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:52.813 | select(.opcode=="crc32c") 00:18:52.813 | "\(.module_name) \(.executed)"' 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93544 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93544 ']' 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93544 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93544 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
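[editor's note] killprocess, exercised once per bperf instance, follows a fixed shape visible in the traces: liveness probe, sudo guard, signal, reap. A condensed reconstruction (sketch only; the real function in autotest_common.sh carries more error handling):

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                           # probe: is the pid still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" != sudo ]              # never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                              # reap and propagate exit status
  }

The trailing wait is why each bperf prints its shutdown Latency table before the next test case starts.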
00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93544' 00:18:52.813 killing process with pid 93544 00:18:52.813 Received shutdown signal, test time was about 2.000000 seconds 00:18:52.813 00:18:52.813 Latency(us) 00:18:52.813 [2024-12-05T12:22:23.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.813 [2024-12-05T12:22:23.682Z] =================================================================================================================== 00:18:52.813 [2024-12-05T12:22:23.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93544 00:18:52.813 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93544 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93621 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93621 /var/tmp/bperf.sock 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93621 ']' 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.072 12:22:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:53.072 [2024-12-05 12:22:23.894388] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:18:53.072 [2024-12-05 12:22:23.894494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93621 ] 00:18:53.330 [2024-12-05 12:22:24.047476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.330 [2024-12-05 12:22:24.102985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.263 12:22:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.263 12:22:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:54.263 12:22:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:54.263 12:22:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:54.263 12:22:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:54.533 12:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:54.533 12:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:54.793 nvme0n1 00:18:54.793 12:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:54.793 12:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:55.051 Running I/O for 2 seconds... 
00:18:56.948 27075.00 IOPS, 105.76 MiB/s [2024-12-05T12:22:27.817Z] 27299.00 IOPS, 106.64 MiB/s 00:18:56.948 Latency(us) 00:18:56.948 [2024-12-05T12:22:27.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.948 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.948 nvme0n1 : 2.01 27277.19 106.55 0.00 0.00 4687.89 1966.08 10366.60 00:18:56.948 [2024-12-05T12:22:27.817Z] =================================================================================================================== 00:18:56.948 [2024-12-05T12:22:27.817Z] Total : 27277.19 106.55 0.00 0.00 4687.89 1966.08 10366.60 00:18:56.948 { 00:18:56.948 "results": [ 00:18:56.948 { 00:18:56.948 "job": "nvme0n1", 00:18:56.948 "core_mask": "0x2", 00:18:56.948 "workload": "randwrite", 00:18:56.948 "status": "finished", 00:18:56.948 "queue_depth": 128, 00:18:56.948 "io_size": 4096, 00:18:56.948 "runtime": 2.007538, 00:18:56.948 "iops": 27277.192262363154, 00:18:56.948 "mibps": 106.55153227485607, 00:18:56.948 "io_failed": 0, 00:18:56.948 "io_timeout": 0, 00:18:56.948 "avg_latency_us": 4687.894711733847, 00:18:56.948 "min_latency_us": 1966.08, 00:18:56.948 "max_latency_us": 10366.603636363636 00:18:56.948 } 00:18:56.948 ], 00:18:56.948 "core_count": 1 00:18:56.948 } 00:18:56.948 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:56.948 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:56.948 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:56.948 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:56.948 | select(.opcode=="crc32c") 00:18:56.948 | "\(.module_name) \(.executed)"' 00:18:56.948 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:57.206 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93621 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93621 ']' 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93621 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.207 12:22:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93621 00:18:57.207 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:57.207 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
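[editor's note] Each run_bperf case above drives the same four-step sequence against bdevperf's private RPC socket. Condensed, with the commands as they appear in the traces (SPDK_EXAMPLE_DIR is an assumed shorthand for the build/examples path):

  "$SPDK_EXAMPLE_DIR/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &      # start idle
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init     # finish subsystem init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests  # timed run

--ddgst is the knob under test: it enables the NVMe/TCP data digest, so every I/O forces a crc32c through the accel framework, which accel_get_stats then accounts for.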
00:18:57.207 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93621' 00:18:57.207 killing process with pid 93621 00:18:57.207 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93621 00:18:57.207 Received shutdown signal, test time was about 2.000000 seconds 00:18:57.207 00:18:57.207 Latency(us) 00:18:57.207 [2024-12-05T12:22:28.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.207 [2024-12-05T12:22:28.076Z] =================================================================================================================== 00:18:57.207 [2024-12-05T12:22:28.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.207 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93621 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93708 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93708 /var/tmp/bperf.sock 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93708 ']' 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:57.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.466 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:57.466 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:57.466 Zero copy mechanism will not be used. 00:18:57.466 [2024-12-05 12:22:28.301381] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:18:57.466 [2024-12-05 12:22:28.301471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93708 ] 00:18:57.724 [2024-12-05 12:22:28.437823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.724 [2024-12-05 12:22:28.475530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.724 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.724 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:57.724 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:57.724 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:57.724 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:58.291 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:58.291 12:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:58.291 nvme0n1 00:18:58.550 12:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:58.550 12:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:58.550 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:58.550 Zero copy mechanism will not be used. 00:18:58.550 Running I/O for 2 seconds... 
00:19:00.420 6640.00 IOPS, 830.00 MiB/s [2024-12-05T12:22:31.289Z] 6659.00 IOPS, 832.38 MiB/s 00:19:00.420 Latency(us) 00:19:00.420 [2024-12-05T12:22:31.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.420 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:00.420 nvme0n1 : 2.00 6657.70 832.21 0.00 0.00 2398.38 1735.21 5510.98 00:19:00.420 [2024-12-05T12:22:31.289Z] =================================================================================================================== 00:19:00.420 [2024-12-05T12:22:31.289Z] Total : 6657.70 832.21 0.00 0.00 2398.38 1735.21 5510.98 00:19:00.420 { 00:19:00.420 "results": [ 00:19:00.420 { 00:19:00.420 "job": "nvme0n1", 00:19:00.420 "core_mask": "0x2", 00:19:00.420 "workload": "randwrite", 00:19:00.420 "status": "finished", 00:19:00.420 "queue_depth": 16, 00:19:00.420 "io_size": 131072, 00:19:00.420 "runtime": 2.003845, 00:19:00.420 "iops": 6657.70057065292, 00:19:00.420 "mibps": 832.212571331615, 00:19:00.420 "io_failed": 0, 00:19:00.420 "io_timeout": 0, 00:19:00.420 "avg_latency_us": 2398.3818729684976, 00:19:00.420 "min_latency_us": 1735.2145454545455, 00:19:00.420 "max_latency_us": 5510.981818181818 00:19:00.420 } 00:19:00.420 ], 00:19:00.420 "core_count": 1 00:19:00.420 } 00:19:00.420 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:00.420 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:00.420 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:00.420 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:00.420 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:00.420 | select(.opcode=="crc32c") 00:19:00.420 | "\(.module_name) \(.executed)"' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93708 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93708 ']' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93708 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93708 00:19:00.988 killing process with pid 93708 00:19:00.988 Received shutdown signal, test time was about 2.000000 seconds 00:19:00.988 00:19:00.988 Latency(us) 00:19:00.988 [2024-12-05T12:22:31.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:00.988 [2024-12-05T12:22:31.857Z] =================================================================================================================== 00:19:00.988 [2024-12-05T12:22:31.857Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93708' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93708 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93708 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93432 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93432 ']' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93432 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93432 00:19:00.988 killing process with pid 93432 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93432' 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93432 00:19:00.988 12:22:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93432 00:19:01.247 00:19:01.247 real 0m16.156s 00:19:01.247 user 0m29.996s 00:19:01.247 sys 0m5.155s 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.247 ************************************ 00:19:01.247 END TEST nvmf_digest_clean 00:19:01.247 ************************************ 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.247 ************************************ 00:19:01.247 START TEST nvmf_digest_error 00:19:01.247 ************************************ 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:19:01.247 12:22:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=93809 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 93809 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93809 ']' 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.247 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.506 [2024-12-05 12:22:32.161943] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:19:01.506 [2024-12-05 12:22:32.162047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.506 [2024-12-05 12:22:32.306417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.506 [2024-12-05 12:22:32.350001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.506 [2024-12-05 12:22:32.350068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.506 [2024-12-05 12:22:32.350080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.506 [2024-12-05 12:22:32.350087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.506 [2024-12-05 12:22:32.350093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.506 [2024-12-05 12:22:32.350437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.766 [2024-12-05 12:22:32.462812] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:01.766 null0 00:19:01.766 [2024-12-05 12:22:32.574901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.766 [2024-12-05 12:22:32.599033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93838 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93838 /var/tmp/bperf.sock 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93838 ']' 00:19:01.766 12:22:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.766 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:02.025 [2024-12-05 12:22:32.655375] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:19:02.025 [2024-12-05 12:22:32.655460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93838 ] 00:19:02.025 [2024-12-05 12:22:32.794990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.025 [2024-12-05 12:22:32.837445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.284 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.284 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:19:02.284 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:02.284 12:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:02.543 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:02.543 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.543 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:02.543 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.543 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:02.543 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:02.801 nvme0n1 00:19:02.801 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:02.801 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.801 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:02.801 12:22:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.801 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:02.802 12:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:02.802 Running I/O for 2 seconds... 00:19:03.060 [2024-12-05 12:22:33.680541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.680593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.680606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.060 [2024-12-05 12:22:33.690264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.690314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.690327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.060 [2024-12-05 12:22:33.702091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.702121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.702132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.060 [2024-12-05 12:22:33.714802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.714846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.714857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.060 [2024-12-05 12:22:33.725547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.725590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.725602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.060 [2024-12-05 12:22:33.739785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.739816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.739827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
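The attach-and-inject sequence captured just above (host/digest.sh@61-69) is what produces the error records that follow: injection is disabled while bdevperf attaches so the connection itself comes up clean, the controller is attached with --ddgst so every C2H data PDU carries a CRC32C data digest, and corruption is then injected into the target's crc32c path, so the digests it sends stop matching the data and each affected READ fails the host-side check. The same sequence as plain commands, with sockets and flags exactly as captured:

    # rpc.py without -s targets nvmf_tgt; -s /var/tmp/bperf.sock targets bdevperf.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable     # attach with valid digests
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # flags as captured above
    bdevperf.py -s /var/tmp/bperf.sock perform_tests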
00:19:03.060 [2024-12-05 12:22:33.750376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.750418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.750429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.060 [2024-12-05 12:22:33.761347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.761377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.761388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.060 [2024-12-05 12:22:33.774198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.774229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.060 [2024-12-05 12:22:33.774240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.060 [2024-12-05 12:22:33.786086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.060 [2024-12-05 12:22:33.786118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.786129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.795459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.795500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.795511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.808509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.808538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.808549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.820106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.820136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.820147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.830737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.830768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.830778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.842090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.842121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.842132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.853062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.853093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.853104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.864737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.864767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.864778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.876526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.876556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.876567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.886150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.886181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.886192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.899166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.899196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.899207] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.911180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.911210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.911221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.061 [2024-12-05 12:22:33.920374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.061 [2024-12-05 12:22:33.920404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.061 [2024-12-05 12:22:33.920415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.320 [2024-12-05 12:22:33.932620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.320 [2024-12-05 12:22:33.932663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.320 [2024-12-05 12:22:33.932684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.320 [2024-12-05 12:22:33.944433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.320 [2024-12-05 12:22:33.944464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.320 [2024-12-05 12:22:33.944475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.320 [2024-12-05 12:22:33.956074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.320 [2024-12-05 12:22:33.956104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.320 [2024-12-05 12:22:33.956115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.320 [2024-12-05 12:22:33.967659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.320 [2024-12-05 12:22:33.967688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.320 [2024-12-05 12:22:33.967699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.320 [2024-12-05 12:22:33.979224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.320 [2024-12-05 12:22:33.979261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.320 [2024-12-05 12:22:33.979273] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.320 [2024-12-05 12:22:33.989068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.320 [2024-12-05 12:22:33.989098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:33.989109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:33.998521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:33.998550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:33.998561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.010146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.010177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.010188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.022671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.022701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.022711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.032150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.032180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.032191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.044339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.044368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.044379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.055310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.055339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:03.321 [2024-12-05 12:22:34.055350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.066901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.066932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.066943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.076504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.076534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.076546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.087883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.087913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.087925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.099458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.099488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.099499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.108536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.108565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.108576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.119976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.120006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.120016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.130664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.130693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:4599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.130704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.142694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.142724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.142735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.154035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.154065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.154076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.165440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.165469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.165480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.174217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.174258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.321 [2024-12-05 12:22:34.185631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.321 [2024-12-05 12:22:34.185662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.321 [2024-12-05 12:22:34.185673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.196939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.196969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.196980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.206351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.206381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.206392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.219836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.219866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.219876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.231792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.231823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.231834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.243614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.243644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.243655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.253710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.253741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.253752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.265212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.265242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.265252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.277024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.277054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.277065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.288642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 
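Reading one of these records: nvme_qpair prints the failed READ (qid, cid, nsid, lba, len) followed by its completion, where "(00/22)" is status code type 0x0 (generic command status) and status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 means Do Not Retry is clear, so bdev_nvme is allowed to resubmit the command, and with --bdev-retry-count -1 set earlier it will keep doing so. A quick triage over a saved copy of this log (filename assumed) to confirm that the only failures in the window are the injected digest errors:

    # Count digest errors, then check that no completion failed with
    # anything other than (00/22); the second command should print 0.
    grep -c 'data digest error' nvmf_digest_error.log
    grep 'spdk_nvme_print_completion' nvmf_digest_error.log | grep -cv '(00/22)'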
00:19:03.580 [2024-12-05 12:22:34.288672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.288684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.298927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.298957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.298971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.309950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.309980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.309991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.321526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.321555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.321566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.332919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.332949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.332961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.344578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.344608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.344619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.355071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.355100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.355111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.365770] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.365800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.365811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.376327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.376356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.376366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.389641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.389671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.389682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.401738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.401768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.580 [2024-12-05 12:22:34.401779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.580 [2024-12-05 12:22:34.411215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.580 [2024-12-05 12:22:34.411251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.581 [2024-12-05 12:22:34.411263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.581 [2024-12-05 12:22:34.423229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.581 [2024-12-05 12:22:34.423266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.581 [2024-12-05 12:22:34.423290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.581 [2024-12-05 12:22:34.434752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.581 [2024-12-05 12:22:34.434783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.581 [2024-12-05 12:22:34.434794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:03.581 [2024-12-05 12:22:34.446341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.581 [2024-12-05 12:22:34.446371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.581 [2024-12-05 12:22:34.446382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.458736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.458767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.458778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.468495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.468524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.468536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.479746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.479777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.479788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.492141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.492172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.492183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.503130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.503160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.503171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.514973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.515004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.515015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.525419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.525448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.535808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.535838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.535849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.548195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.548232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.548251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.560349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.560382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.560393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.572778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.572813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.572824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.584738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.584775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.584795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.596682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.596717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.596737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.608686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.608719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.608739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.619406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.619438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.619449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.632955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.632990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.633001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.839 [2024-12-05 12:22:34.644602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.839 [2024-12-05 12:22:34.644635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.839 [2024-12-05 12:22:34.644647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.840 [2024-12-05 12:22:34.655075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.840 [2024-12-05 12:22:34.655110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.840 [2024-12-05 12:22:34.655129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.840 22338.00 IOPS, 87.26 MiB/s [2024-12-05T12:22:34.709Z] [2024-12-05 12:22:34.667618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.840 [2024-12-05 12:22:34.667654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.840 [2024-12-05 12:22:34.667672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.840 [2024-12-05 12:22:34.678957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.840 [2024-12-05 12:22:34.678992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:16366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.840 [2024-12-05 12:22:34.679003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.840 [2024-12-05 12:22:34.692234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.840 [2024-12-05 12:22:34.692270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.840 [2024-12-05 12:22:34.692299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:03.840 [2024-12-05 12:22:34.703125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:03.840 [2024-12-05 12:22:34.703162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:03.840 [2024-12-05 12:22:34.703182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.715702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.715737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.715758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.726091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.727303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.727330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.737864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.738064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.738098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.753416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.753679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.753698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.765127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.765311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.765347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.774813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.774849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.774861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.785645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.785680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.785692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.798740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.798776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.798788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.810998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.811033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.811045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.820673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.820850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.820867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.831574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.831734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.831766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.843461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 
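The lone throughput sample embedded a few records above (22338.00 IOPS, 87.26 MiB/s) is bdevperf's periodic progress line, and it shows the point of the dnr:0 detail: the injected failures are transient and retryable, so I/O keeps completing while the digest-error path is exercised. The samples can be pulled out of a saved log (filename assumed) with:

    # Extract bdevperf's periodic throughput lines from the error noise.
    grep -Eo '[0-9]+\.[0-9]+ IOPS, [0-9]+\.[0-9]+ MiB/s' nvmf_digest_error.log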
00:19:04.098 [2024-12-05 12:22:34.843497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.843510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.854783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.854944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.854977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.866067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.866242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.866259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.877758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.877933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.877951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.887805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.098 [2024-12-05 12:22:34.887841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.098 [2024-12-05 12:22:34.887853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.098 [2024-12-05 12:22:34.899783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.099 [2024-12-05 12:22:34.899820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.099 [2024-12-05 12:22:34.899832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.099 [2024-12-05 12:22:34.910320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.099 [2024-12-05 12:22:34.910355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.099 [2024-12-05 12:22:34.910367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.099 [2024-12-05 12:22:34.922253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.099 [2024-12-05 12:22:34.922300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.099 [2024-12-05 12:22:34.922330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.099 [2024-12-05 12:22:34.932016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.099 [2024-12-05 12:22:34.932052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.099 [2024-12-05 12:22:34.932064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.099 [2024-12-05 12:22:34.944229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.099 [2024-12-05 12:22:34.944266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.099 [2024-12-05 12:22:34.944310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.099 [2024-12-05 12:22:34.955267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.099 [2024-12-05 12:22:34.955308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.099 [2024-12-05 12:22:34.955320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:34.964933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:34.965092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:34.965126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:34.976882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:34.976919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:34.976931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:34.987514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:34.987736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:34.987754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:34.998499] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:34.998671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:34.998687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.009600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.009776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.009793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.019955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.019991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.020003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.030974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.031009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.031021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.041964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.041999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.042011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.051838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.051873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.051884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.064160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.064196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.064208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.075432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.075469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.075482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.086543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.086578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.086590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.097616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.097650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.097663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.108886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.109043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.357 [2024-12-05 12:22:35.109076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.357 [2024-12-05 12:22:35.120900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.357 [2024-12-05 12:22:35.121089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.121233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.358 [2024-12-05 12:22:35.131763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.358 [2024-12-05 12:22:35.131952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.358 [2024-12-05 12:22:35.144185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.358 [2024-12-05 12:22:35.144388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.144522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.358 [2024-12-05 12:22:35.155861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.358 [2024-12-05 12:22:35.156046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.156167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.358 [2024-12-05 12:22:35.166533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.358 [2024-12-05 12:22:35.166724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.166863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.358 [2024-12-05 12:22:35.179642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.358 [2024-12-05 12:22:35.179835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.179973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.358 [2024-12-05 12:22:35.191829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.358 [2024-12-05 12:22:35.191999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.192134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.358 [2024-12-05 12:22:35.202672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.358 [2024-12-05 12:22:35.202845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.202999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.358 [2024-12-05 12:22:35.215943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.358 [2024-12-05 12:22:35.216136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.358 [2024-12-05 12:22:35.216275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.228301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.228520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.228540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.239036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.239072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.239084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.250876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.250912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.250923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.260825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.260860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.260872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.273230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.273265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.273311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.283288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.283322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.283335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.294368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.294404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.294415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.304111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.304146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:04.638 [2024-12-05 12:22:35.304157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.317172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.317207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.317219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.328741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.328776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.328787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.338159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.338345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.338362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.349523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.349726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.349758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.361685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.361878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.362014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.372497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.372689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.372826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.385199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.638 [2024-12-05 12:22:35.385411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:1940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.638 [2024-12-05 12:22:35.385533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.638 [2024-12-05 12:22:35.395132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.395362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.395485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.408564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.408758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.408895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.420691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.420883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.421004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.431667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.431860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.431995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.444019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.444211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.444367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.455623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.455815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.455936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.465564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.465801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.465925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.477197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.477404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.477526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.489837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.490062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.490195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.639 [2024-12-05 12:22:35.500656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.639 [2024-12-05 12:22:35.500846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.639 [2024-12-05 12:22:35.500985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.513358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.513566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.526326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.526516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.526638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.539400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.539642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.539815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.550230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 
00:19:04.897 [2024-12-05 12:22:35.550478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.550600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.561976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.562167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.562396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.574065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.574274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.574442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.584526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.584717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.584836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.596300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.596492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.596628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.607707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.607878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.607896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.618781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.618937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.618969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.630400] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.630435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.630447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.642799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.642835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.642863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 [2024-12-05 12:22:35.653046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.653082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.653095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 22286.50 IOPS, 87.06 MiB/s [2024-12-05T12:22:35.766Z] [2024-12-05 12:22:35.664452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19c1200) 00:19:04.897 [2024-12-05 12:22:35.664482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.897 [2024-12-05 12:22:35.664509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:04.897 00:19:04.897 Latency(us) 00:19:04.897 [2024-12-05T12:22:35.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.897 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:04.897 nvme0n1 : 2.01 22307.66 87.14 0.00 0.00 5730.55 2815.07 16086.11 00:19:04.897 [2024-12-05T12:22:35.766Z] =================================================================================================================== 00:19:04.897 [2024-12-05T12:22:35.766Z] Total : 22307.66 87.14 0.00 0.00 5730.55 2815.07 16086.11 00:19:04.897 { 00:19:04.897 "results": [ 00:19:04.897 { 00:19:04.898 "job": "nvme0n1", 00:19:04.898 "core_mask": "0x2", 00:19:04.898 "workload": "randread", 00:19:04.898 "status": "finished", 00:19:04.898 "queue_depth": 128, 00:19:04.898 "io_size": 4096, 00:19:04.898 "runtime": 2.006172, 00:19:04.898 "iops": 22307.658565666352, 00:19:04.898 "mibps": 87.13929127213419, 00:19:04.898 "io_failed": 0, 00:19:04.898 "io_timeout": 0, 00:19:04.898 "avg_latency_us": 5730.552472622455, 00:19:04.898 "min_latency_us": 2815.069090909091, 00:19:04.898 "max_latency_us": 16086.10909090909 00:19:04.898 } 00:19:04.898 ], 00:19:04.898 "core_count": 1 00:19:04.898 } 00:19:04.898 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:04.898 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b 
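A quick consistency check on the summary above (illustrative only; both numbers are taken from the JSON block): at 4096-byte I/Os, the reported IOPS reproduces the MiB/s column.

```bash
# iops * io_size / 2^20 should match bdevperf's MiB/s figure.
python3 -c 'print(round(22307.658565666352 * 4096 / 1048576, 2))'   # -> 87.14
```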
nvme0n1 00:19:04.898 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:04.898 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:04.898 | .driver_specific 00:19:04.898 | .nvme_error 00:19:04.898 | .status_code 00:19:04.898 | .command_transient_transport_error' 00:19:05.155 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 175 > 0 )) 00:19:05.155 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93838 00:19:05.155 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93838 ']' 00:19:05.155 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93838 00:19:05.155 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:05.155 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.155 12:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93838 00:19:05.155 killing process with pid 93838 00:19:05.155 Received shutdown signal, test time was about 2.000000 seconds 00:19:05.155 00:19:05.155 Latency(us) 00:19:05.155 [2024-12-05T12:22:36.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.155 [2024-12-05T12:22:36.024Z] =================================================================================================================== 00:19:05.155 [2024-12-05T12:22:36.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.155 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:05.155 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:05.155 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93838' 00:19:05.155 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93838 00:19:05.155 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93838 00:19:05.412 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:05.412 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:05.412 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:05.413 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:05.413 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:05.413 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93915 00:19:05.413 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93915 /var/tmp/bperf.sock 00:19:05.413 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:05.413 12:22:36 
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93915 ']'
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:05.413 [2024-12-05 12:22:36.246095] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:19:05.413 [2024-12-05 12:22:36.246381] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93915 ]
00:19:05.413 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:05.413 Zero copy mechanism will not be used.
00:19:05.670 [2024-12-05 12:22:36.392859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:05.670 [2024-12-05 12:22:36.433497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:05.929 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:06.186 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:06.187 12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
12:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:06.445 nvme0n1
00:19:06.446 12:22:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
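Condensed, the RPC sequence just traced sets up the second pass: turn on error accounting with unlimited retries, connect with data digest enabled while injection is off, then arm the CRC32C corruption. The commands are verbatim from the trace; only the RPC shorthand variable is introduced here for brevity.

```bash
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors, retry instead of failing
$RPC accel_error_inject_error -o crc32c -t disable                   # connect cleanly first
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst enables the data digest
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # now corrupt crc32c results
```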
12:22:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
12:22:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:06.446 12:22:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
12:22:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
12:22:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:06.446 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:06.446 Zero copy mechanism will not be used.
00:19:06.446 Running I/O for 2 seconds...
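With the controller attached under --ddgst, every received data PDU now fails its CRC32C data-digest check because of the injected corruption, and the driver reports each affected READ as a transient transport error (the "(00/22)" status printed below); io_failed stayed 0 in the first pass, consistent with the retry behavior configured above. The kickoff itself is just the RPC helper from the trace (paths verbatim from this log):

```bash
# perform_tests drives the queued job in the already-running bdevperf process
# over its RPC socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
```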
00:19:06.446 [2024-12-05 12:22:37.211799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:06.446 [2024-12-05 12:22:37.211863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.446 [2024-12-05 12:22:37.211878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... ~40 further triples of the same shape omitted, timestamps 12:22:37.215971 through 12:22:37.356923: nvme_tcp.c:1365 reports "data digest error on tqpair=(0x2190e00)", the affected READ (qid:1, len:32, varying cid/lba) is printed, and each completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:19:06.707 [2024-12-05 12:22:37.360384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:06.707 [2024-12-05 12:22:37.360420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.707 [2024-12-05 12:22:37.360448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.707 [2024-12-05 12:22:37.364670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.707 [2024-12-05 12:22:37.364708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.707 [2024-12-05 12:22:37.364735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.707 [2024-12-05 12:22:37.367417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.707 [2024-12-05 12:22:37.367470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.707 [2024-12-05 12:22:37.367482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.707 [2024-12-05 12:22:37.371544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.707 [2024-12-05 12:22:37.371615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.707 [2024-12-05 12:22:37.371627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.707 [2024-12-05 12:22:37.375369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.707 [2024-12-05 12:22:37.375424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.375437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.379036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.379070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.379097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.382929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.382979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.383007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.386189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.386240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.386268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.390431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.390485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.390497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.393932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.393970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.393998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.396995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.397031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.397058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.400917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.400953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.400981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.405164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.405201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.405228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.409547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.409585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.409611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.413854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.413892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 
[2024-12-05 12:22:37.413920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.416755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.416806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.416834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.420750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.420787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.420814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.425176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.425213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.425241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.428418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.428471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.428498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.432316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.432352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.432380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.436332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.436369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.436396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.440085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.440122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.440149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.443709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.443745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.443772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.447567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.447621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.447633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.450732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.450766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.450792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.454943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.454978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.455005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.458637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.458672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.458700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.462037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.462073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.462100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.466100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.466136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.466163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.469993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.470030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.708 [2024-12-05 12:22:37.470058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.708 [2024-12-05 12:22:37.473826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.708 [2024-12-05 12:22:37.473877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.473905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.476441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.476477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.476505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.480463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.480499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.480526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.484288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.484336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.484364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.488156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.488194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.488221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.491136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.491171] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.491198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.495179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.495212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.495263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.499654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.499724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.499751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.503614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.503668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.503681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.508443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.508525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.508538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.514428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.514523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.514538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.519847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.519916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.519929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.524481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.524534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.524562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.528744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.528780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.528807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.532960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.532996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.533022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.536062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.536098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.536126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.540022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.540058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.540085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.544149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.544186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.544213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.548440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.548492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.548519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.552642] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.552679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.552706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.555466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.555518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.555529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.559705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.559740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.559767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.563798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.563834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.563862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.568133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.568170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.568198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.709 [2024-12-05 12:22:37.572657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.709 [2024-12-05 12:22:37.572693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.709 [2024-12-05 12:22:37.572721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.576795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.576831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.576859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:19:06.970 [2024-12-05 12:22:37.580151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.580188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.580215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.584812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.584850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.584877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.588026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.588063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.588090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.591592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.591658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.591685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.595482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.595540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.595552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.598460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.598494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.598520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.601961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.601998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.602025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.606605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.606642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.606669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.610703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.610739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.610767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.613546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.613582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.613609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.617811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.617847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.617875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.622079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.622115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.622142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.625758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.625795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.625823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.629472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.629523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.629551] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.633514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.633549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.633576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.636220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.636255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.636284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.639954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.639992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.640019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.643907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.643943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.643970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.647815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.647851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.647878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.650601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.650635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.650662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.654739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.654775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 
12:22:37.654802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.658604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.658641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.658668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.662503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.662556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.662583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.970 [2024-12-05 12:22:37.666766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.970 [2024-12-05 12:22:37.666804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.970 [2024-12-05 12:22:37.666832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.669645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.669680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.669707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.673889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.673927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.673955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.678286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.678319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.678346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.682585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.682623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.682650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.686874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.686910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.686937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.689654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.689690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.689717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.693559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.693596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.693623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.697948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.697986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.698014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.702464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.702500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.702528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.706741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.706779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.706806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.710579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.710615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.710642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.713878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.713915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.713942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.717870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.717906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.717933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.721949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.721986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.722014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.725147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.725183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.725210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.729204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.729241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.729269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.732881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.732918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.732944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.736909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.736945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.736972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.739879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.739914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.739940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.744183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.744220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.744247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.747740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.747775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.747802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.751471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.751523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.751535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.755073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.755107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.755134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.758853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.758903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.758930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.762263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 
00:19:06.971 [2024-12-05 12:22:37.762322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.762351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.766370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.766419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.766447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.769625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.971 [2024-12-05 12:22:37.769692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.971 [2024-12-05 12:22:37.769718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.971 [2024-12-05 12:22:37.773723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.773775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.773803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.777817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.777870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.777898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.781762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.781814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.781842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.784660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.784712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.784740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.788754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.788806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.788834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.792649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.792687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.792713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.795818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.795854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.795882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.799718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.799754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.799781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.804216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.804255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.804283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.807160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.807212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.807265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.811749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.811800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.811828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.816067] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.816121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.816150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.819374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.819413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.819426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.823652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.823705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.823747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.828545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.828601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.828630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.972 [2024-12-05 12:22:37.833244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:06.972 [2024-12-05 12:22:37.833289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.972 [2024-12-05 12:22:37.833318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.836497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.836552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.836581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.840883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.840920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.840947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.845804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.845841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.845869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.848481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.848535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.848562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.852929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.852965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.852993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.856221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.856257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.856284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.860066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.860102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.860129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.864030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.864067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.864095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.868018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.868055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.868082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.871680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.871718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.871745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.232 [2024-12-05 12:22:37.875347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.232 [2024-12-05 12:22:37.875386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.232 [2024-12-05 12:22:37.875398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.879438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.879491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.879503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.882553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.882586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.882612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.886510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.886548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.886576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.890820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.890857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.890884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.893902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.893938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.893965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.897844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.897880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.897907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.902265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.902313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.902341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.906532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.906569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.906596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.910913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.910951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.910978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.914939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.914973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.915000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.918158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.918191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.918218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.922033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.922068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:07.233 [2024-12-05 12:22:37.922095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.925721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.925757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.925784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.929042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.929078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.929106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.932870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.932906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.932934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.936702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.936739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.936766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.939634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.939670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.939697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.943566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.943619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.943631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.947222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.947290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.947304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.951144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.951179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.951206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.955396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.955449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.955461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.958153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.958185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.958212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.962308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.962344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.962371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.967182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.967220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.967256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.971147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.971181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.971208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.973932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.973966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.973992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.233 [2024-12-05 12:22:37.977867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.233 [2024-12-05 12:22:37.977904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.233 [2024-12-05 12:22:37.977931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:37.982099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:37.982136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:37.982162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:37.986493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:37.986531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:37.986558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:37.990429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:37.990465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:37.990492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:37.993421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:37.993457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:37.993484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:37.997686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:37.997723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:37.997750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.002055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 
[2024-12-05 12:22:38.002091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.002118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.006251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.006299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.006328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.009246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.009305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.009318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.013231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.013267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.013303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.017751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.017804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.017817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.021508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.021545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.021572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.024549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.024601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.024614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.028636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.028704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.028729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.031943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.031980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.032007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.035914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.035950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.035977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.040503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.040556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.040568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.043410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.043464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.043476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.047477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.047518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.047530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.051752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.051788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.051815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.055671] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.055706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.055733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.059359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.059412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.059424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.062873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.062906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.062933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.066567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.066603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.066630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.069634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.069699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.069726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.073988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.074025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.074052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.077179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.077216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.077243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:19:07.234 [2024-12-05 12:22:38.081219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.234 [2024-12-05 12:22:38.081253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.234 [2024-12-05 12:22:38.081281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.234 [2024-12-05 12:22:38.084588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.235 [2024-12-05 12:22:38.084626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.235 [2024-12-05 12:22:38.084653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.235 [2024-12-05 12:22:38.088427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.235 [2024-12-05 12:22:38.088479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.235 [2024-12-05 12:22:38.088506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.235 [2024-12-05 12:22:38.091787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.235 [2024-12-05 12:22:38.091825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.235 [2024-12-05 12:22:38.091852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.235 [2024-12-05 12:22:38.095439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.235 [2024-12-05 12:22:38.095492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.235 [2024-12-05 12:22:38.095503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.099735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.099776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.099803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.104011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.104048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.104075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.107458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.107511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.107523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.111344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.111397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.111409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.114479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.114529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.114541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.118848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.118884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.118911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.122262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.122324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.122336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.126164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.126201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.126227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.129512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.129549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.129576] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.133235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.133272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.133311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.136929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.136967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.136994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.140705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.140741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.140768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.144697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.144734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.144762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.495 [2024-12-05 12:22:38.148704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.495 [2024-12-05 12:22:38.148756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.495 [2024-12-05 12:22:38.148784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.152271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.152362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.496 [2024-12-05 12:22:38.152375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.156588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.156643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.496 [2024-12-05 
12:22:38.156656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.161067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.161120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.496 [2024-12-05 12:22:38.161148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.164714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.164767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.496 [2024-12-05 12:22:38.164795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.168996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.169047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.496 [2024-12-05 12:22:38.169074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.173506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.173561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.496 [2024-12-05 12:22:38.173589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.178342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.178407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.496 [2024-12-05 12:22:38.178435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.182743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.182794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.496 [2024-12-05 12:22:38.182822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:07.496 [2024-12-05 12:22:38.186052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:07.496 [2024-12-05 12:22:38.186103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.186132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.190552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.190603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.190630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.195195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.195313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.195328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.198809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.198859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.198887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.202263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.202340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.202368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.206340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.206392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.206420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.496 8018.00 IOPS, 1002.25 MiB/s [2024-12-05T12:22:38.365Z] [2024-12-05 12:22:38.211996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.212049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.212077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.216638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.216690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.216717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.221033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.221085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.221112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.224199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.224250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.224277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.228103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.228156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.228182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.232430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.232481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.232508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.236333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.236384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.236411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.239324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.239376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.239388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.243377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.243431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.243443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.247751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.247801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.247829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.251712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.251743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.251770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.255324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.255376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.255389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.259002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.496 [2024-12-05 12:22:38.259051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.496 [2024-12-05 12:22:38.259078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.496 [2024-12-05 12:22:38.262530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.262579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.262607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.265697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.265748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.265775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.269670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.269736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.269763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.274042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.274092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.274119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.278679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.278732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.278759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.281687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.281737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.281750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.285881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.285933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.285960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.290551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.290609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.290638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.294941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.294992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.295019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.299222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.299304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.299319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.302012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.302060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.302087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.306054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.306105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.306133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.311023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.311075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.311103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.314028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.314077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.314104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.318077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.318131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.318143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.322460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.322512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.322540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.326965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.327018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.327046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.331352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.331405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.331417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.335727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.335778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.335806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.338912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.338962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.338989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.343087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.343138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.343165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.347650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.347703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.347732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.352262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.352343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.352357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.356416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.356470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.356482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.497 [2024-12-05 12:22:38.359981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.497 [2024-12-05 12:22:38.360016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.497 [2024-12-05 12:22:38.360043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.364679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.364716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.364743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.369308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.369357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.369369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.372347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.372382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.372408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.376406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.376441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.376469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.380524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.380561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.380589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.383841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.383877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.383904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.388456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.388494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.388521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.392305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.392341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.392369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.395549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.395617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.395644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.399631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.399667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.399694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.403094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.403129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.403156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.407889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.407927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.407954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.410970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.411004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.411031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.414775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.414808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.414835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.419028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.419063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.419090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.423491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.758 [2024-12-05 12:22:38.423528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.758 [2024-12-05 12:22:38.423542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.758 [2024-12-05 12:22:38.426554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.426587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.426614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.430608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.430644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.430671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.434499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.434536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.434563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.438090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.438127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.438154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.442404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.442440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.442467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.446188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.446252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.446278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.449554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.449607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.449618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.453232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.453269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.453307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.457152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.457189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.457216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.460292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.460327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.460354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.464011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.464049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.464076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.467116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.467149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.467176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.471379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.471433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.471445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.475396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.475434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.475446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.479135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.479170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.479197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.482037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.482072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.482099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.485782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.485818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.485845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.490003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.490038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.490065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.494179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.494215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.494242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.497781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.497818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.497845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.501447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.501501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.501513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.505659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.505695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.505723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.508748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.508786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.508813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.512619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.512656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.512684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.517268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.517331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.517342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.520249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.520295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.520323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.524645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.524680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.759 [2024-12-05 12:22:38.524707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.759 [2024-12-05 12:22:38.528017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.759 [2024-12-05 12:22:38.528054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.528082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.532061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.532098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.532126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.535500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.535556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.535584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.539189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.539224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.539276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.543151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.543187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.543214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.546937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.546972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.547000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.550232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.550267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.550306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.554336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.554372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.554399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.558747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.558784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.558811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.562856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.562892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.562919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.565732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.565768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.565795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.570101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.570138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.570165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.573143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.573179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.573207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.577168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.577205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.577232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.581486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.581540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.581552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.585856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.585893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.585921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.590068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.590103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.590130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.592806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.592842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.592869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.596931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.596968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.596996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.600149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.600185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.600214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.604814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.604851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.604878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.608244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.608305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.608318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.611415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.611470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.611482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.614885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.614919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.614946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.619132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.619167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.619195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:07.760 [2024-12-05 12:22:38.623200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:07.760 [2024-12-05 12:22:38.623259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.760 [2024-12-05 12:22:38.623306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.626465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.626500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.626527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.629877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.629930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.629958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.634243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.634309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.634323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.637766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.637819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.637847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.641626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.641678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.641719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.644814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.644850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.644877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.649316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.649366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.649378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.653574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.653628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.653640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.657728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.657762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.657789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.660835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.660871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.660899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.664559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.664597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.022 [2024-12-05 12:22:38.664624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:19:08.022 [2024-12-05 12:22:38.668583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.022 [2024-12-05 12:22:38.668621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same three-entry pattern (data digest error on tqpair=(0x2190e00), the in-flight READ on sqid:1, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining reads, with cid and lba varying, from 12:22:38.671 through 12:22:39.193; the tail of the run appears after the sketch below ...]
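Each completion line in this run prints the NVMe status as (SCT/SC) followed by the phase, more, and do-not-retry bits: (00/22) is Status Code Type 0h (generic command status) with Status Code 22h, which SPDK renders as COMMAND TRANSIENT TRANSPORT ERROR, and dnr:0 means the host may retry the command. A small sketch of how those fields unpack from the 16-bit status word (completion queue entry dword 3, bits 31:16) follows; print_status is a hypothetical helper written for this note, not an SPDK API.

#include <stdint.h>
#include <stdio.h>

/* Decode an NVMe completion status word: bit 0 is the phase tag, bits 8:1
 * the status code (SC), bits 11:9 the status code type (SCT), bit 14 the
 * more bit, and bit 15 do-not-retry (DNR). */
static void print_status(uint16_t sw)
{
    unsigned p   = sw & 0x1u;
    unsigned sc  = (sw >> 1) & 0xFFu;
    unsigned sct = (sw >> 9) & 0x7u;
    unsigned m   = (sw >> 14) & 0x1u;
    unsigned dnr = (sw >> 15) & 0x1u;

    printf("(%02X/%02X) p:%u m:%u dnr:%u%s\n", sct, sc, p, m, dnr,
           (sct == 0x0u && sc == 0x22u) ? "  <- COMMAND TRANSIENT TRANSPORT ERROR" : "");
}

int main(void)
{
    print_status(0x22u << 1);  /* prints "(00/22) p:0 m:0 dnr:0 ..." as in the log */
    return 0;
}

Because dnr stays clear in every completion here, the failed reads remain retryable, which matches the "transient" wording of the status.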
00:19:08.549 [2024-12-05 12:22:39.196877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.549 [2024-12-05 12:22:39.196912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:08.549 [2024-12-05 12:22:39.196939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:08.549 [2024-12-05 12:22:39.200753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00)
00:19:08.549 [2024-12-05 12:22:39.200790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:19:08.549 [2024-12-05 12:22:39.200817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:08.549 [2024-12-05 12:22:39.204729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:08.549 [2024-12-05 12:22:39.204765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.549 [2024-12-05 12:22:39.204792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:08.549 8010.50 IOPS, 1001.31 MiB/s [2024-12-05T12:22:39.418Z] [2024-12-05 12:22:39.210355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2190e00) 00:19:08.549 [2024-12-05 12:22:39.210377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:08.549 [2024-12-05 12:22:39.210388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:08.549 00:19:08.549 Latency(us) 00:19:08.549 [2024-12-05T12:22:39.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.549 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:08.549 nvme0n1 : 2.00 8010.11 1001.26 0.00 0.00 1993.94 647.91 6047.19 00:19:08.549 [2024-12-05T12:22:39.418Z] =================================================================================================================== 00:19:08.549 [2024-12-05T12:22:39.418Z] Total : 8010.11 1001.26 0.00 0.00 1993.94 647.91 6047.19 00:19:08.549 { 00:19:08.549 "results": [ 00:19:08.549 { 00:19:08.549 "job": "nvme0n1", 00:19:08.549 "core_mask": "0x2", 00:19:08.549 "workload": "randread", 00:19:08.549 "status": "finished", 00:19:08.549 "queue_depth": 16, 00:19:08.549 "io_size": 131072, 00:19:08.549 "runtime": 2.002094, 00:19:08.549 "iops": 8010.113411258412, 00:19:08.549 "mibps": 1001.2641764073015, 00:19:08.549 "io_failed": 0, 00:19:08.549 "io_timeout": 0, 00:19:08.549 "avg_latency_us": 1993.9386604839945, 00:19:08.549 "min_latency_us": 647.9127272727272, 00:19:08.549 "max_latency_us": 6047.185454545454 00:19:08.549 } 00:19:08.549 ], 00:19:08.549 "core_count": 1 00:19:08.549 } 00:19:08.549 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:08.549 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:08.549 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:08.549 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:08.549 | .driver_specific 00:19:08.549 | .nvme_error 00:19:08.549 | .status_code 00:19:08.549 | .command_transient_transport_error' 00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 518 > 0 )) 00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93915 00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # 
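The assertion above boils down to one RPC plus a jq filter over its output; a minimal standalone sketch, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev as in this run:

    #!/usr/bin/env bash
    # Read the per-bdev NVMe error counters kept by --nvme-error-stat and
    # require at least one TRANSIENT TRANSPORT ERROR completion, which is
    # how injected data digest errors must surface on the host side.
    SOCK=/var/tmp/bperf.sock
    BDEV=nvme0n1
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" bdev_get_iostat -b "$BDEV" |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))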
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93915
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93915 ']'
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93915
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93915
00:19:08.808 killing process with pid 93915
00:19:08.808 Received shutdown signal, test time was about 2.000000 seconds
00:19:08.808
00:19:08.808 Latency(us)
[2024-12-05T12:22:39.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T12:22:39.677Z] ===================================================================================================================
[2024-12-05T12:22:39.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93915'
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93915
00:19:08.808 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93915
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93988
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93988 /var/tmp/bperf.sock
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93988 ']'
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
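waitforlisten above blocks until the freshly forked bdevperf answers on its RPC socket; a rough standalone equivalent of that launch-and-wait step (paths and flags copied from this run, the polling loop is our stand-in for the harness helper):

    #!/usr/bin/env bash
    # Start bdevperf idle (-z) with its JSON-RPC server on a UNIX socket,
    # then poll the socket until it answers an RPC.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bperf.sock
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
      sleep 0.1
    done
    kill -0 "$bperfpid"   # fail if bdevperf died during startup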
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:09.067 12:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:09.326 [2024-12-05 12:22:39.836770] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:19:09.326 [2024-12-05 12:22:39.836876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93988 ]
00:19:09.326 [2024-12-05 12:22:39.980734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:09.326 [2024-12-05 12:22:40.029933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:09.326 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:09.326 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:19:09.326 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:09.326 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:09.585 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:09.585 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:09.585 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:09.585 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:09.585 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:09.585 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:10.153 nvme0n1
00:19:10.153 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:19:10.153 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:10.153 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:10.153 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:10.153 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:10.153 12:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:10.153 Running I/O for 2 seconds...
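Read back to back, the trace above is a five-step recipe; restated as a plain script (every command, address, and NQN copied from this run, the RPC shorthand variable is ours):

    #!/usr/bin/env bash
    # Set up the write-path digest-error case: count NVMe errors and retry
    # forever, clear any armed crc32c injection, attach the target with
    # data digest (--ddgst) enabled, arm corruption of the next 256 crc32c
    # operations, then kick off the paused bdevperf job.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests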
00:19:10.153 [2024-12-05 12:22:40.852851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef3a28
00:19:10.153 [2024-12-05 12:22:40.853788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:10.153 [2024-12-05 12:22:40.853850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
[... as on the read side, every WRITE on qid:1 hits a Data digest error on tqpair=(0xcadfa0) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the repeated completions for the rest of the run are elided ...]
00:19:10.675 [2024-12-05 12:22:41.778597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef4b08
00:19:10.675 [2024-12-05 12:22:41.779584] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.938 [2024-12-05 12:22:41.779648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:10.938 [2024-12-05 12:22:41.788459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef6cc8 00:19:10.938 [2024-12-05 12:22:41.789761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.938 [2024-12-05 12:22:41.789810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:10.938 [2024-12-05 12:22:41.796909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef1430 00:19:10.938 [2024-12-05 12:22:41.797671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:10.938 [2024-12-05 12:22:41.797734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.809659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef4298 00:19:11.198 [2024-12-05 12:22:41.810858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.810906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.818796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee6300 00:19:11.198 [2024-12-05 12:22:41.820119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.820169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.828098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eeee38 00:19:11.198 [2024-12-05 12:22:41.829069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.829115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:11.198 26468.00 IOPS, 103.39 MiB/s [2024-12-05T12:22:42.067Z] [2024-12-05 12:22:41.841118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efe720 00:19:11.198 [2024-12-05 12:22:41.842611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.842675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.848341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with 
pdu=0x200016ede470 00:19:11.198 [2024-12-05 12:22:41.849078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.849126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.860360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef57b0 00:19:11.198 [2024-12-05 12:22:41.861650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.861699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.869690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eef270 00:19:11.198 [2024-12-05 12:22:41.871182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:28 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.871230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.879589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee7818 00:19:11.198 [2024-12-05 12:22:41.880513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.880560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.891207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef7538 00:19:11.198 [2024-12-05 12:22:41.892562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.892609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.899848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eed4e8 00:19:11.198 [2024-12-05 12:22:41.901186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.901235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.909627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee84c0 00:19:11.198 [2024-12-05 12:22:41.910671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.910720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.922146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcadfa0) with pdu=0x200016ee8088 00:19:11.198 [2024-12-05 12:22:41.923784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.923836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.929369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef96f8 00:19:11.198 [2024-12-05 12:22:41.930193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.930238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.939305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef46d0 00:19:11.198 [2024-12-05 12:22:41.940269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.940343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.950798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efb8b8 00:19:11.198 [2024-12-05 12:22:41.952242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.198 [2024-12-05 12:22:41.952317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:11.198 [2024-12-05 12:22:41.958021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef3e60 00:19:11.199 [2024-12-05 12:22:41.958743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:41.958790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:41.969587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee5ec8 00:19:11.199 [2024-12-05 12:22:41.970788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:41.970818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:41.978083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ede038 00:19:11.199 [2024-12-05 12:22:41.979072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:41.979104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:41.988378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcadfa0) with pdu=0x200016ef31b8 00:19:11.199 [2024-12-05 12:22:41.989472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:41.989503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:41.995497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee6fa8 00:19:11.199 [2024-12-05 12:22:41.996141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:41.996202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:42.005514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef5378 00:19:11.199 [2024-12-05 12:22:42.006054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:42.006103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:42.014691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eeb328 00:19:11.199 [2024-12-05 12:22:42.015569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:42.015603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:42.026095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efeb58 00:19:11.199 [2024-12-05 12:22:42.027475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:42.027526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:42.032951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016edf550 00:19:11.199 [2024-12-05 12:22:42.033590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:42.033638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:42.044412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efb048 00:19:11.199 [2024-12-05 12:22:42.045569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:42.045599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:42.053755] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee73e0 00:19:11.199 [2024-12-05 12:22:42.054890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:42.054920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:11.199 [2024-12-05 12:22:42.062478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efb480 00:19:11.199 [2024-12-05 12:22:42.063616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.199 [2024-12-05 12:22:42.063650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.072319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016edf550 00:19:11.459 [2024-12-05 12:22:42.073098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.073146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.083490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eddc00 00:19:11.459 [2024-12-05 12:22:42.084935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.084966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.090341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef31b8 00:19:11.459 [2024-12-05 12:22:42.091005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.091051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.101770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef7970 00:19:11.459 [2024-12-05 12:22:42.102952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.102983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.109200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eefae0 00:19:11.459 [2024-12-05 12:22:42.109874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.109921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.121577] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee7818 00:19:11.459 [2024-12-05 12:22:42.122994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.123026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.128450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef46d0 00:19:11.459 [2024-12-05 12:22:42.129122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.129168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.139977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee88f8 00:19:11.459 [2024-12-05 12:22:42.141152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.141184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.148845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016edece0 00:19:11.459 [2024-12-05 12:22:42.150131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.150192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.158087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef46d0 00:19:11.459 [2024-12-05 12:22:42.158921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.158983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.168445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eee5c8 00:19:11.459 [2024-12-05 12:22:42.169646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.169708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.177041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eefae0 00:19:11.459 [2024-12-05 12:22:42.178319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.178376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 
12:22:42.186409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef35f0 00:19:11.459 [2024-12-05 12:22:42.187385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.187434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.196017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee0a68 00:19:11.459 [2024-12-05 12:22:42.196626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.196688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.204878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef0350 00:19:11.459 [2024-12-05 12:22:42.205387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.205423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.213186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eeb760 00:19:11.459 [2024-12-05 12:22:42.213785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.213833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.224640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eee190 00:19:11.459 [2024-12-05 12:22:42.225744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.225775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.233487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efc560 00:19:11.459 [2024-12-05 12:22:42.234666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.234696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.242641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efcdd0 00:19:11.459 [2024-12-05 12:22:42.243424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.243490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
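Each repeating pair of notices above is one injected failure making the round trip: the TCP transport computes a CRC32C digest over a 4 KiB data PDU (the len:0x1000 SGL blocks), data_crc32_calc_done reports the mismatch on the receiving side, and the WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR, status (00/22), the retryable transport-level status this digest-error test wants to provoke. Below is a minimal Python sketch of the CRC32C checksum that NVMe/TCP specifies for its header and data digests. It is illustrative only: SPDK computes digests in C (optionally offloaded through its accel framework), and the single bit flip here merely stands in for whatever corruption the test's error injection applies.

def crc32c(data: bytes) -> int:
    # Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
    # initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check value.
assert crc32c(b"123456789") == 0xE3069283

# One 4 KiB data block, like the "len:0x1000" SGL entries in the log.
payload = bytearray(4096)
good = crc32c(bytes(payload))

# Flip a single bit; the receiver's recomputed digest no longer matches,
# so the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
payload[0] ^= 0x01
assert crc32c(bytes(payload)) != good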
00:19:11.459 [2024-12-05 12:22:42.252170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee6738 00:19:11.459 [2024-12-05 12:22:42.252835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.252899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.261135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef92c0 00:19:11.459 [2024-12-05 12:22:42.261728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.261777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.272253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eea680 00:19:11.459 [2024-12-05 12:22:42.273641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.273704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.279169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016edfdc0 00:19:11.459 [2024-12-05 12:22:42.279909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.279956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.290836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee1b48 00:19:11.459 [2024-12-05 12:22:42.291953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.291986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.302344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee0630 00:19:11.459 [2024-12-05 12:22:42.303883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.459 [2024-12-05 12:22:42.303915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:11.459 [2024-12-05 12:22:42.309121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef8a50 00:19:11.460 [2024-12-05 12:22:42.309930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.460 [2024-12-05 12:22:42.309976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0017 
p:0 m:0 dnr:0 00:19:11.460 [2024-12-05 12:22:42.320654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee9e10 00:19:11.460 [2024-12-05 12:22:42.321947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.460 [2024-12-05 12:22:42.321977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:11.719 [2024-12-05 12:22:42.330008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eeaef0 00:19:11.719 [2024-12-05 12:22:42.331161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.719 [2024-12-05 12:22:42.331191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:11.719 [2024-12-05 12:22:42.339318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee7c50 00:19:11.719 [2024-12-05 12:22:42.340285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.719 [2024-12-05 12:22:42.340335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:11.719 [2024-12-05 12:22:42.348936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efb480 00:19:11.719 [2024-12-05 12:22:42.349662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.719 [2024-12-05 12:22:42.349709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:11.719 [2024-12-05 12:22:42.358980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eeaef0 00:19:11.719 [2024-12-05 12:22:42.360264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.719 [2024-12-05 12:22:42.360323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:11.719 [2024-12-05 12:22:42.367926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eeb760 00:19:11.719 [2024-12-05 12:22:42.369230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.719 [2024-12-05 12:22:42.369262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:11.719 [2024-12-05 12:22:42.377969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efb480 00:19:11.719 [2024-12-05 12:22:42.379162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.719 [2024-12-05 12:22:42.379192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 
cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:11.719 [2024-12-05 12:22:42.386889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eee190 00:19:11.720 [2024-12-05 12:22:42.388257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.388318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.396310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef96f8 00:19:11.720 [2024-12-05 12:22:42.397275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.397328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.405861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efeb58 00:19:11.720 [2024-12-05 12:22:42.406599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.406646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.414618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef8e88 00:19:11.720 [2024-12-05 12:22:42.415189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.415224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.422899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef1ca0 00:19:11.720 [2024-12-05 12:22:42.423539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.423620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.435146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efef90 00:19:11.720 [2024-12-05 12:22:42.436544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.436591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.443683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee5a90 00:19:11.720 [2024-12-05 12:22:42.444778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.444810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.453983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee12d8 00:19:11.720 [2024-12-05 12:22:42.455224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.455307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.462800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef1868 00:19:11.720 [2024-12-05 12:22:42.464133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.464167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.472203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efe720 00:19:11.720 [2024-12-05 12:22:42.473093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.473123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.481512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef7970 00:19:11.720 [2024-12-05 12:22:42.482246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.482320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.490923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efa7d8 00:19:11.720 [2024-12-05 12:22:42.492025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.492062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.500719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee6fa8 00:19:11.720 [2024-12-05 12:22:42.501583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.501647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.509613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee99d8 00:19:11.720 [2024-12-05 12:22:42.510294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.510353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.519720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efb8b8 00:19:11.720 [2024-12-05 12:22:42.520867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.520897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.528599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efac10 00:19:11.720 [2024-12-05 12:22:42.529888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.529919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.537808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee6300 00:19:11.720 [2024-12-05 12:22:42.538731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.538760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.549158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee27f0 00:19:11.720 [2024-12-05 12:22:42.550600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.550662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.556082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee99d8 00:19:11.720 [2024-12-05 12:22:42.556781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.556827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.567565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef4f40 00:19:11.720 [2024-12-05 12:22:42.568809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.568840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.576926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eedd58 00:19:11.720 [2024-12-05 12:22:42.578150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.720 [2024-12-05 12:22:42.578181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:11.720 [2024-12-05 12:22:42.585415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efbcf0 00:19:11.980 [2024-12-05 12:22:42.587035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-05 12:22:42.587066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:11.980 [2024-12-05 12:22:42.594115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ef9f68 00:19:11.980 [2024-12-05 12:22:42.594712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-05 12:22:42.594762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:11.980 [2024-12-05 12:22:42.605695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efa3a0 00:19:11.980 [2024-12-05 12:22:42.606803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-05 12:22:42.606833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:11.980 [2024-12-05 12:22:42.615450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efd640 00:19:11.980 [2024-12-05 12:22:42.616437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-05 12:22:42.616494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:11.980 [2024-12-05 12:22:42.623986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016efa3a0 00:19:11.980 [2024-12-05 12:22:42.624753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-05 12:22:42.624813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:11.980 [2024-12-05 12:22:42.634189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee5a90 00:19:11.980 [2024-12-05 12:22:42.635359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.980 [2024-12-05 12:22:42.635392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:11.981 [2024-12-05 12:22:42.643509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016ee84c0 00:19:11.981 [2024-12-05 12:22:42.644391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:11.981 [2024-12-05 12:22:42.644468] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:19:11.981 [2024-12-05 12:22:42.652019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcadfa0) with pdu=0x200016eedd58
00:19:11.981 [2024-12-05 12:22:42.652637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:11.981 [2024-12-05 12:22:42.652671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern, a data digest error on tqpair=(0xcadfa0), the failed 4096-byte WRITE (len:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeats for ~19 further commands between 12:22:42.663 and 12:22:42.833; pdu, lba, cid, and sqhd vary per entry ...]
00:19:11.981 26554.00 IOPS, 103.73 MiB/s
00:19:11.981 Latency(us)
00:19:11.981 [2024-12-05T12:22:42.850Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:11.981 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:11.981 nvme0n1                     : 2.00   26568.46     103.78       0.00     0.00    4812.96    1951.19   12928.47
00:19:11.981 [2024-12-05T12:22:42.850Z] ===================================================================================================================
00:19:11.981 [2024-12-05T12:22:42.850Z] Total                       :   26568.46     103.78       0.00     0.00    4812.96    1951.19   12928.47
00:19:11.981 {
00:19:11.981   "results": [
00:19:11.981     {
00:19:11.981       "job": "nvme0n1",
00:19:11.981       "core_mask": "0x2",
00:19:11.981       "workload": "randwrite",
00:19:11.981       "status": "finished",
00:19:11.981       "queue_depth": 128,
00:19:11.981       "io_size": 4096,
00:19:11.981       "runtime": 2.003729,
00:19:11.981       "iops": 26568.463100549026,
00:19:11.981       "mibps": 103.78305898651963,
00:19:11.981       "io_failed": 0,
00:19:11.981       "io_timeout": 0,
00:19:11.981       "avg_latency_us": 4812.9618546574775,
00:19:11.981       "min_latency_us": 1951.1854545454546,
00:19:11.981       "max_latency_us": 12928.465454545454
00:19:11.981     }
00:19:11.981   ],
00:19:11.981   "core_count": 1
00:19:11.981 }
00:19:12.241 12:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:12.241 12:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:12.241 12:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:12.241 12:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93988
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93988 ']'
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93988
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93988
00:19:12.500 killing process with pid 93988
00:19:12.500 Received shutdown signal, test time was about 2.000000 seconds
00:19:12.500 Latency(us)
00:19:12.500 [2024-12-05T12:22:43.369Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:12.500 [2024-12-05T12:22:43.369Z] ===================================================================================================================
00:19:12.500 [2024-12-05T12:22:43.369Z] Total                       :       0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93988'
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93988
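The pass/fail check in the trace above reduces to one RPC plus a jq filter: with bdev_nvme_set_options --nvme-error-stat in effect, bdev_get_iostat reports per-status-code NVMe error counters under .driver_specific.nvme_error, and get_transient_errcount reads the COMMAND TRANSIENT TRANSPORT ERROR bucket (208 here, so the assertion passes). A standalone sketch of that check, using the socket and bdev names from this run; the helper structure is paraphrased from the trace, not copied from digest.sh:

    # Count completions that came back as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
    # Every injected digest error must have surfaced as a transient transport error.
    (( errcount > 0 ))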
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93988
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94065
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94065 /var/tmp/bperf.sock
00:19:12.500 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:19:12.758 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94065 ']'
00:19:12.758 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:12.758 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:12.758 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:12.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:12.758 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:12.758 12:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:12.759 [2024-12-05 12:22:43.425858] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:19:12.759 [2024-12-05 12:22:43.425977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94065 ]
00:19:12.759 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:12.759 Zero copy mechanism will not be used.
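This is the launch half of run_bperf_err: the three arguments become the workload (-w randwrite), I/O size (-o 131072), and queue depth (-q 16), and -z starts bdevperf suspended so it can be configured over its private RPC socket before any I/O runs; the zero-copy notice fires because the 131072-byte I/O size exceeds the 65536-byte threshold. A sketch of the equivalent launch-and-wait step; the backgrounding with & and $! is an assumption about how the helper captures the pid, and waitforlisten is the autotest helper seen in the trace:

    # Start bdevperf idle (-z) on core 1 (-m 2) with a private RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Block until the application answers on the socket before sending any RPCs.
    waitforlisten "$bperfpid" /var/tmp/bperf.sock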
00:19:12.759 [2024-12-05 12:22:43.563656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:12.759 [2024-12-05 12:22:43.615827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:13.703 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:13.703 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:19:13.703 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:13.703 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:13.980 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:13.980 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:13.980 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:13.980 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:13.980 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:13.980 12:22:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:14.256 nvme0n1
00:19:14.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:19:14.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:14.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:14.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:14.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:14.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:14.515 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:14.515 Zero copy mechanism will not be used.
00:19:14.515 Running I/O for 2 seconds...
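The trace above is the whole error-injection setup in four RPCs: the bdevperf side enables per-status-code NVMe error counters (--nvme-error-stat) with the retry count this test uses (--bdev-retry-count -1); any stale crc32c injection is cleared via rpc_cmd (the autotest helper bound to the target application's default socket); the controller is attached with --ddgst so every data PDU payload is verified with CRC32C; and the injector is re-armed in corrupt mode before perform_tests starts the workload configured at launch. A condensed sketch under those assumptions; the commands, socket, address, and NQN are taken verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 1. bdevperf side: track NVMe errors per status code; retry count as traced (-1).
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # 2. Target side: clear any crc32c error injection left over from earlier tests.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # 3. bdevperf side: attach the TCP controller with data digest enabled.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 4. Target side: corrupt crc32c results (flags as in the trace), then run.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests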
00:19:14.515 [2024-12-05 12:22:45.145842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8
00:19:14.515 [2024-12-05 12:22:45.145954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:14.515 [2024-12-05 12:22:45.145984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern, a data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8, the failed 131072-byte WRITE (len:32), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeats for roughly a hundred further commands between 12:22:45.151 and 12:22:45.648; lba, cid (0-2), and sqhd vary per entry ...]
00:19:15.039 [2024-12-05 12:22:45.653674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.653838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.653859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.658591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.658723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.658742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.663471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.663667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.663699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.668403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.668536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.668555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.673146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.673345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.673365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.678009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.678184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.678203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.682872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.683039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.683059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.687749] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.687910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.687929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.692506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.692680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.692699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.697290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.697483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.697503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.702162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.702366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.039 [2024-12-05 12:22:45.702386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.039 [2024-12-05 12:22:45.706955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.039 [2024-12-05 12:22:45.707103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.707122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.711777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.711935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.711954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.716668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.716815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.716834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.721362] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.721534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.721555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.726118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.726370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.726391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.730840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.731010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.731030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.735731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.735897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.735917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.740476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.740636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.740656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.745326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.745522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.745543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.750114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.750317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.750354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.040 
[2024-12-05 12:22:45.755012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.755162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.755181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.759899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.760071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.760091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.764711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.764883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.764903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.769530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.769720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.769740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.774326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.774505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.774525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.779091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.779322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.779344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.783919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.784090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.784109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:19:15.040 [2024-12-05 12:22:45.788776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.788921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.788941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.793605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.793765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.793785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.798402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.798551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.798571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.803011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.803211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.803230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.807875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.808035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.808054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.812715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.812886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.812906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.817434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.817567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.817588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.822218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.822409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.822429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.827070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.827226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.827270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.831898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.832067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.832087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.040 [2024-12-05 12:22:45.836683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.040 [2024-12-05 12:22:45.836815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.040 [2024-12-05 12:22:45.836834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.841412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.841584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.841604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.846227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.846397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.846417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.850925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.851093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.851112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.855867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.856000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.856019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.860629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.860787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.860812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.865451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.865633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.865653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.870322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.870452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.870472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.875043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.875215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.875235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.879879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.880049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.880068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.884682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.884843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.884863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.889494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.889666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.889699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.894315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.894479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.894499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.899132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.899295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.899332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.041 [2024-12-05 12:22:45.904196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.041 [2024-12-05 12:22:45.904379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.041 [2024-12-05 12:22:45.904399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.909233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.909439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.909459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.914231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.914409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.914429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.919032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.919207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.919227] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.923904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.924057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.924077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.928750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.928914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.928933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.933593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.933779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.933799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.938342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.938472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.938491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.943219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.943423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.943449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.948016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.948169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.948189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.952831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.953000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.953019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.957630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.957846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.957866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.962424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.962542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.962562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.967149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.967372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.967393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.971954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.972122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.972141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.976722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.976877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.976897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.981466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.981636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.981656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.986290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.986451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 
12:22:45.986471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.990982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.991144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.991163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:45.995899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:45.996080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:45.996105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:46.000886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:46.001055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:46.001076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:46.005879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:46.006074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:46.006110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:46.010957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:46.011126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:46.011153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:46.016087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.301 [2024-12-05 12:22:46.016308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.301 [2024-12-05 12:22:46.016361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.301 [2024-12-05 12:22:46.021075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.021243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:15.302 [2024-12-05 12:22:46.021265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.026094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.026267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.026302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.031046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.031214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.031233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.036050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.036210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.036229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.040968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.041138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.041159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.045838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.046006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.046025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.050564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.050721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.050742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.055352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.055529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.055550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.060217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.060404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.060424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.065089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.065248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.065267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.069892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.070056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.070076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.074687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.074856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.074875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.079509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.079716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.079736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.084262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.084435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.302 [2024-12-05 12:22:46.084454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.302 [2024-12-05 12:22:46.089094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:15.302 [2024-12-05 12:22:46.089253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:15.302 [2024-12-05 12:22:46.089273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:19:15.302 [2024-12-05 12:22:46.093850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8
00:19:15.302 [2024-12-05 12:22:46.093979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:15.302 [2024-12-05 12:22:46.093999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-record sequence (a tcp.c:2241 "Data digest error" on tqpair=(0xcae2e0) with pdu=0x200016eff3c8, the print of the failed WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats back-to-back from 12:22:46.098 through 12:22:46.142; only the timestamp, cid, lba, and sqhd fields vary ...]
00:19:15.302 6370.00 IOPS, 796.25 MiB/s [2024-12-05T12:22:46.171Z]
[... the run of digest-error triplets then continues uninterrupted from 12:22:46.148 through 12:22:46.726, still qid:1 on the same tqpair and pdu, with cid taking values 0, 1, and 2, lba varying per write, and the elapsed stamps advancing from 00:19:15.302 to 00:19:16.087 ...]
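Context for the failure repeated above: NVMe/TCP can protect each data PDU with a trailing DDGST field, a CRC32C computed over the PDU's data section. The tcp.c:2241 data_crc32_calc_done error fires when the digest recomputed on receive does not match the DDGST carried in the PDU; the affected command is then completed with TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, i.e. the host may retry it, which is the pattern this run (apparently a digest error-injection test) keeps printing before the log resumes below. The following is a minimal, self-contained C sketch of that digest check, not SPDK's implementation (SPDK ships its own optimized CRC32C helpers); the 4096-byte payload and the single flipped bit are hypothetical stand-ins for a real data PDU and an injected corruption.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78, initial
 * value 0xFFFFFFFF, final XOR 0xFFFFFFFF. These are the digest parameters
 * NVMe/TCP specifies for HDGST/DDGST. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical payload standing in for a data PDU's data section. */
    uint8_t payload[4096];
    memset(payload, 0xA5, sizeof(payload));

    /* The sender appends DDGST = CRC32C(data section) to the PDU. */
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    /* Simulate the kind of in-flight corruption an injection test creates. */
    payload[100] ^= 0x01;

    /* The receiver recomputes the digest over what actually arrived and
     * compares it with the carried DDGST; a mismatch is what the log above
     * reports as "Data digest error", and the command is completed with
     * TRANSIENT TRANSPORT ERROR (00/22), dnr:0, so it remains retryable. */
    if (crc32c(payload, sizeof(payload)) != ddgst)
        printf("data digest error: DDGST mismatch\n");
    return 0;
}

A side note on the interleaved progress line: 796.25 MiB/s divided by 6370.00 IOPS is exactly 0.125 MiB, i.e. 128 KiB per I/O, which is consistent with the len:32 (32-block) writes above if the namespace uses 4 KiB logical blocks.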
nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.087 [2024-12-05 12:22:46.730893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.087 [2024-12-05 12:22:46.735533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.087 [2024-12-05 12:22:46.735712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.087 [2024-12-05 12:22:46.735732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.087 [2024-12-05 12:22:46.740313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.087 [2024-12-05 12:22:46.740481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.087 [2024-12-05 12:22:46.740502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.087 [2024-12-05 12:22:46.745373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.087 [2024-12-05 12:22:46.745558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.745578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.750189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.750405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.750431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.755061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.755267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.755320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.760098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.760367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.760404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.764976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.765135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.765154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.769789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.769921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.769940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.774425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.774605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.774631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.779178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.779381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.779401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.783991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.784121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.784140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.788811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.788971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.788990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.793567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.793727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.793746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.798273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.798446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.798465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.803020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.803204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.803224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.807819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.807992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.808011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.812607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.812795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.812815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.817392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.817525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.817544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.822156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.822360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.822382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.826924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.827096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.827115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.831727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 
12:22:46.831900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.831919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.836561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.836759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.836778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.841338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.088 [2024-12-05 12:22:46.841497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.088 [2024-12-05 12:22:46.841517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.088 [2024-12-05 12:22:46.846170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.846359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.846379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.850952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.851120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.851139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.855727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.855858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.855879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.860512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.860695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.860715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.865273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 
00:19:16.089 [2024-12-05 12:22:46.865459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.865478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.870072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.870236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.870255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.874938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.875094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.875114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.879781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.879951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.879972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.884643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.884829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.884848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.889470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.889610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.889630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.894228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.894410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.894429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.898996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) 
with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.899125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.899145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.903768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.903937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.903956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.908636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.908799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.908819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.913369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.913530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.913549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.918196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.918387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.918412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.922954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.923133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.923153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.927809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.927988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.928007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.932608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.932791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.932811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.937386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.937502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.937522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.942172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.942364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.089 [2024-12-05 12:22:46.942385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.089 [2024-12-05 12:22:46.946899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.089 [2024-12-05 12:22:46.947084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.090 [2024-12-05 12:22:46.947104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.090 [2024-12-05 12:22:46.951975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.090 [2024-12-05 12:22:46.952138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.090 [2024-12-05 12:22:46.952158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.957117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.957301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.957352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.962358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.962508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.962529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.967218] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.967465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.967517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.972617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.972820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.972841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.977452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.977633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.977652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.982293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.982471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.982490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.987025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.987199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.987218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.991947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.992119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.992139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:46.996799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:46.996954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:46.996973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.001590] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.001750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.001770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.006370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.006539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.006559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.011160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.011380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.011402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.015975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.016147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.016166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.020907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.021123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.021149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.025629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.025811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.025831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.030748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.031026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.031069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.350 
[2024-12-05 12:22:47.035932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.036081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.036102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.041021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.041196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.041216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.045993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.046124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.046145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.050999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.051174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.051194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.056031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.056190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.056209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.060911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.061069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.061089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.065762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.065941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.065961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.070533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.070690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.070709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.075387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.075607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.075632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.080350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.080511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.350 [2024-12-05 12:22:47.080531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.350 [2024-12-05 12:22:47.085171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.350 [2024-12-05 12:22:47.085352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.085374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.089984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.090145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.090164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.094694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.094867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.094886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.099370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.099545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.099579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.104177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.104356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.104377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.108960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.109124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.109144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.113756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.113916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.113935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.118551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.118711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.118730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.123299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.123464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.123484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.128013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.128282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.128303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.132706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.132880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.132900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.137447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.137583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.137603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.351 [2024-12-05 12:22:47.142227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae2e0) with pdu=0x200016eff3c8 00:19:16.351 [2024-12-05 12:22:47.142411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.351 [2024-12-05 12:22:47.142431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.351 6378.00 IOPS, 797.25 MiB/s 00:19:16.351 Latency(us) 00:19:16.351 [2024-12-05T12:22:47.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.351 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:16.351 nvme0n1 : 2.00 6375.15 796.89 0.00 0.00 2504.56 1668.19 6225.92 00:19:16.351 [2024-12-05T12:22:47.220Z] =================================================================================================================== 00:19:16.351 [2024-12-05T12:22:47.220Z] Total : 6375.15 796.89 0.00 0.00 2504.56 1668.19 6225.92 00:19:16.351 { 00:19:16.351 "results": [ 00:19:16.351 { 00:19:16.351 "job": "nvme0n1", 00:19:16.351 "core_mask": "0x2", 00:19:16.351 "workload": "randwrite", 00:19:16.351 "status": "finished", 00:19:16.351 "queue_depth": 16, 00:19:16.351 "io_size": 131072, 00:19:16.351 "runtime": 2.003403, 00:19:16.351 "iops": 6375.152677718861, 00:19:16.351 "mibps": 796.8940847148576, 00:19:16.351 "io_failed": 0, 00:19:16.351 "io_timeout": 0, 00:19:16.351 "avg_latency_us": 2504.5606172593457, 00:19:16.351 "min_latency_us": 1668.189090909091, 00:19:16.351 "max_latency_us": 6225.92 00:19:16.351 } 00:19:16.351 ], 00:19:16.351 "core_count": 1 00:19:16.351 } 00:19:16.351 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:16.351 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:16.351 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:16.351 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:16.351 | .driver_specific 00:19:16.351 | .nvme_error 00:19:16.351 | .status_code 00:19:16.351 | .command_transient_transport_error' 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 412 > 0 )) 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94065 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94065 ']' 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94065 00:19:16.610 12:22:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94065 00:19:16.610 killing process with pid 94065 00:19:16.610 Received shutdown signal, test time was about 2.000000 seconds 00:19:16.610 00:19:16.610 Latency(us) 00:19:16.610 [2024-12-05T12:22:47.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.610 [2024-12-05T12:22:47.479Z] =================================================================================================================== 00:19:16.610 [2024-12-05T12:22:47.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94065' 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94065 00:19:16.610 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94065 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93809 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93809 ']' 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93809 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93809 00:19:16.869 killing process with pid 93809 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93809' 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93809 00:19:16.869 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93809 00:19:17.128 ************************************ 00:19:17.128 END TEST nvmf_digest_error 00:19:17.128 ************************************ 00:19:17.128 00:19:17.128 real 0m15.843s 00:19:17.128 user 0m29.351s 00:19:17.128 sys 0m5.136s 00:19:17.128 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.128 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:17.128 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 
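The get_transient_errcount helper traced above is the core of this test's pass/fail check: it pulls the per-bdev error counters over the bperf RPC socket and extracts the transient transport error count with jq. A minimal standalone sketch of that same query (same socket path, script path, and bdev name as in this run; the count of 412 seen above is simply whatever this 2-second run accumulated) would be:

    # Read the transient transport error counter for a bdev via the SPDK RPC socket.
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test passes when at least one injected digest error surfaced as a transient error.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors on $bdev"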
00:19:17.128 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:17.128 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:17.128 12:22:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:17.388 rmmod nvme_tcp 00:19:17.388 rmmod nvme_fabrics 00:19:17.388 rmmod nvme_keyring 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 93809 ']' 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 93809 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 93809 ']' 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 93809 00:19:17.388 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (93809) - No such process 00:19:17.388 Process with pid 93809 is not found 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 93809 is not found' 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:17.388 
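The nvmf_veth_fini teardown that begins here continues with the link deletions just below; regrouped into one place, the commands the harness runs one at a time amount to the following sketch (all interface and namespace names exactly as traced in this log, only the ordering is consolidated):

    # Detach every test link from the bridge and bring it down, then delete
    # the bridge, the host-side veth endpoints, and the namespace-side endpoints.
    for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$link" nomaster
        ip link set "$link" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2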
12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:17.388 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:19:17.647 00:19:17.647 real 0m33.054s 00:19:17.647 user 0m59.629s 00:19:17.647 sys 0m10.743s 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:17.647 ************************************ 00:19:17.647 END TEST nvmf_digest 00:19:17.647 ************************************ 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.647 12:22:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.648 12:22:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.648 ************************************ 00:19:17.648 START TEST nvmf_mdns_discovery 00:19:17.648 ************************************ 00:19:17.648 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:17.648 * Looking for test storage... 
00:19:17.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:17.648 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:17.648 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:19:17.648 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:17.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.908 --rc genhtml_branch_coverage=1 00:19:17.908 --rc genhtml_function_coverage=1 00:19:17.908 --rc genhtml_legend=1 00:19:17.908 --rc geninfo_all_blocks=1 00:19:17.908 --rc geninfo_unexecuted_blocks=1 00:19:17.908 00:19:17.908 ' 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:17.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.908 --rc genhtml_branch_coverage=1 00:19:17.908 --rc genhtml_function_coverage=1 00:19:17.908 --rc genhtml_legend=1 00:19:17.908 --rc geninfo_all_blocks=1 00:19:17.908 --rc geninfo_unexecuted_blocks=1 00:19:17.908 00:19:17.908 ' 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:17.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.908 --rc genhtml_branch_coverage=1 00:19:17.908 --rc genhtml_function_coverage=1 00:19:17.908 --rc genhtml_legend=1 00:19:17.908 --rc geninfo_all_blocks=1 00:19:17.908 --rc geninfo_unexecuted_blocks=1 00:19:17.908 00:19:17.908 ' 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:17.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.908 --rc genhtml_branch_coverage=1 00:19:17.908 --rc genhtml_function_coverage=1 00:19:17.908 --rc genhtml_legend=1 00:19:17.908 --rc geninfo_all_blocks=1 00:19:17.908 --rc geninfo_unexecuted_blocks=1 00:19:17.908 00:19:17.908 ' 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.908 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.909 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:17.909 Cannot find device "nvmf_init_br" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:17.909 Cannot find device "nvmf_init_br2" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:17.909 Cannot find device "nvmf_tgt_br" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.909 Cannot find device "nvmf_tgt_br2" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:17.909 Cannot find device "nvmf_init_br" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:17.909 Cannot find device "nvmf_init_br2" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:17.909 Cannot find device "nvmf_tgt_br" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:17.909 Cannot find device "nvmf_tgt_br2" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:17.909 Cannot find device "nvmf_br" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:17.909 Cannot find device "nvmf_init_if" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:17.909 Cannot find device "nvmf_init_if2" 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:19:17.909 12:22:48 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.909 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
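Condensed, the nvmf_veth_init calls above build the following topology; every command is taken from the log, with only grouping and comments added. Two initiator veth pairs stay in the root namespace, the target veth ends move into nvmf_tgt_ns_spdk, and the *_br peer ends are enslaved to the nvmf_br bridge immediately after this point.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator pair 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge that will join all *_br ends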
00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:18.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:18.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:19:18.169 00:19:18.169 --- 10.0.0.3 ping statistics --- 00:19:18.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.169 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:18.169 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:18.169 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.102 ms 00:19:18.169 00:19:18.169 --- 10.0.0.4 ping statistics --- 00:19:18.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.169 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:18.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:18.169 00:19:18.169 --- 10.0.0.1 ping statistics --- 00:19:18.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.169 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:18.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:18.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:19:18.169 00:19:18.169 --- 10.0.0.2 ping statistics --- 00:19:18.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.169 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=94417 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 94417 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94417 ']' 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.169 12:22:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.428 [2024-12-05 12:22:49.059455] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
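The nvmfappstart call whose output begins above reduces to roughly this sketch. The binary path and flags are from the log; the backgrounding and the socket poll standing in for waitforlisten are editorial, with /var/tmp/spdk.sock assumed as SPDK's default RPC socket for instance -i 0.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
# waitforlisten, approximately: block until the RPC UNIX socket exists
while ! [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done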
00:19:18.428 [2024-12-05 12:22:49.059540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.428 [2024-12-05 12:22:49.211028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.428 [2024-12-05 12:22:49.269771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.428 [2024-12-05 12:22:49.269837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.428 [2024-12-05 12:22:49.269852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.428 [2024-12-05 12:22:49.269863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.428 [2024-12-05 12:22:49.269873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.428 [2024-12-05 12:22:49.270360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.687 [2024-12-05 12:22:49.540760] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.687 [2024-12-05 12:22:49.548937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.687 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.946 null0 00:19:18.946 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.946 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:18.946 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.946 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.946 null1 00:19:18.946 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.946 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:18.946 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.946 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.946 null2 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.947 null3 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94455 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94455 /tmp/host.sock 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94455 ']' 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:18.947 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.947 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.947 [2024-12-05 12:22:49.657995] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:19:18.947 [2024-12-05 12:22:49.658080] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94455 ] 00:19:18.947 [2024-12-05 12:22:49.811557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.205 [2024-12-05 12:22:49.865128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.205 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.205 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:19.205 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:19:19.205 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:19:19.205 12:22:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:19:19.464 12:22:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94471 00:19:19.464 12:22:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:19:19.464 12:22:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:19:19.464 12:22:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:19:19.464 Process 1064 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:19:19.464 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:19:19.464 Successfully dropped root privileges. 00:19:19.464 avahi-daemon 0.8 starting up. 00:19:19.464 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:19:19.464 Successfully called chroot(). 00:19:19.464 Successfully dropped remaining capabilities. 00:19:19.464 No service file found in /etc/avahi/services. 00:19:20.400 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:19:20.400 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:19:20.400 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:19:20.400 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:19:20.400 Network interface enumeration completed. 00:19:20.400 Registering new address record for fe80::f0:39ff:fe07:b678 on nvmf_tgt_if2.*. 
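The avahi-daemon launched above receives its config through a /dev/fd/63 process substitution; written out as a regular file (path is illustrative), it is just:

cat > /tmp/avahi-nvmf.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf &
avahipid=$!

Restricting allow-interfaces to the two target-side veths keeps mDNS announcements off the initiator side, and use-ipv6=no matches the IPv4-only addressing of this topology.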
00:19:20.400 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:19:20.400 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:19:20.400 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:19:20.400 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1937261434. 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:19:20.400 12:22:51 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.400 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.401 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:19:20.401 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:20.401 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:20.401 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.401 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:20.401 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:20.401 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.401 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # 
sort 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 [2024-12-05 12:22:51.426068] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 [2024-12-05 12:22:51.489178] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.660 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.920 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.920 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:19:20.920 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.920 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.920 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.920 12:22:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:19:21.486 [2024-12-05 12:22:52.326069] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:22.053 [2024-12-05 12:22:52.726090] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:22.053 [2024-12-05 12:22:52.726118] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:19:22.053 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:22.053 cookie is 0 00:19:22.053 is_local: 1 00:19:22.053 our_own: 0 00:19:22.053 wide_area: 0 00:19:22.053 multicast: 1 00:19:22.053 cached: 1 00:19:22.053 [2024-12-05 12:22:52.826072] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:22.053 [2024-12-05 12:22:52.826098] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:19:22.053 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:22.053 cookie is 0 00:19:22.053 is_local: 1 00:19:22.053 our_own: 0 00:19:22.053 wide_area: 0 00:19:22.053 multicast: 1 00:19:22.053 cached: 1 00:19:22.988 [2024-12-05 12:22:53.727037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.988 [2024-12-05 12:22:53.727086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24850 with addr=10.0.0.4, port=8009 00:19:22.988 [2024-12-05 12:22:53.727123] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:22.988 [2024-12-05 12:22:53.727138] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:22.988 [2024-12-05 12:22:53.727148] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:19:22.988 [2024-12-05 12:22:53.836043] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:22.988 [2024-12-05 12:22:53.836068] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:22.988 [2024-12-05 12:22:53.836087] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:23.248 [2024-12-05 12:22:53.922124] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:19:23.248 [2024-12-05 12:22:53.976511] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:23.248 [2024-12-05 12:22:53.977332] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f59a10:1 started. 00:19:23.248 [2024-12-05 12:22:53.979066] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:19:23.248 [2024-12-05 12:22:53.979093] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:23.248 [2024-12-05 12:22:53.984456] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f59a10 was disconnected and freed. delete nvme_qpair. 00:19:24.194 [2024-12-05 12:22:54.726948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.194 [2024-12-05 12:22:54.727023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f425e0 with addr=10.0.0.4, port=8009 00:19:24.194 [2024-12-05 12:22:54.727042] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:24.194 [2024-12-05 12:22:54.727052] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:24.194 [2024-12-05 12:22:54.727061] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:19:25.128 [2024-12-05 12:22:55.726919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.128 [2024-12-05 12:22:55.726959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f427c0 with addr=10.0.0.4, port=8009 00:19:25.128 [2024-12-05 12:22:55.726976] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:25.128 [2024-12-05 12:22:55.726985] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:25.128 [2024-12-05 12:22:55.726994] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:19:25.696 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:19:25.696 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:19:25.696 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:19:25.696 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:19:25.696 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:19:25.696 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:19:25.696 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:19:25.955 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:19:25.955 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:25.955 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.955 [2024-12-05 12:22:56.574794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:19:25.955 [2024-12-05 12:22:56.576817] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:25.955 [2024-12-05 12:22:56.576869] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.955 [2024-12-05 12:22:56.582786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:19:25.955 [2024-12-05 12:22:56.583819] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:25.955 12:22:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.955 12:22:56 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:19:25.955 [2024-12-05 12:22:56.714905] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:25.955 [2024-12-05 12:22:56.714941] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:25.955 [2024-12-05 12:22:56.736120] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:19:25.955 [2024-12-05 12:22:56.736145] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:19:25.955 [2024-12-05 12:22:56.736162] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:19:25.955 [2024-12-05 12:22:56.801246] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:25.955 [2024-12-05 12:22:56.822198] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:19:26.215 [2024-12-05 12:22:56.876556] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:19:26.215 [2024-12-05 12:22:56.877133] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x1f56b00:1 started. 00:19:26.215 [2024-12-05 12:22:56.878639] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:19:26.215 [2024-12-05 12:22:56.878680] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:19:26.215 [2024-12-05 12:22:56.884797] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x1f56b00 was disconnected and freed. delete nvme_qpair. 
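The helper exercised above, check_mdns_request_exists, is fully visible in its xtrace (@85-@108); a minimal bash sketch reconstructing it (the traced commands are verbatim from the log, the surrounding control flow is inferred):

    check_mdns_request_exists() {
        local process=$1 ip=$2 port=$3 check_type=$4
        local output
        local -a lines
        # Enumerate every _nvme-disc._tcp record avahi currently knows about,
        # in parsable (-p) one-record-per-line form (traced at @92).
        output=$(avahi-browse -t -r _nvme-disc._tcp -p)
        readarray -t lines <<< "$output"
        for line in "${lines[@]}"; do
            # A record counts as a hit only if it names the target process and
            # carries the expected address and port (the @96 pattern tests).
            if [[ $line == *$process* && $line == *$ip* && $line == *$port* ]]; then
                [[ $check_type == found ]] && return 0
                return 1   # matched although the caller expected "not found"
            fi
        done
        # Nothing matched: success iff the caller expected "not found".
        [[ $check_type == found ]] && return 1
        return 0
    }

With check_type set to 'not found', as in the @152 call, the function succeeds only when no avahi-browse record names spdk1 on 10.0.0.4:8009, which is exactly what the captured output shows before the @154 RPC adds that listener.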
00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:19:26.783 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:19:26.783 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:19:26.783 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:19:26.783 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:26.783 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:26.783 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:26.783 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:26.783 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.043 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:27.044 12:22:57 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:27.044 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.303 [2024-12-05 12:22:57.990132] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f58f20:1 started. 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:27.303 [2024-12-05 12:22:57.995059] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f58f20 was disconnected and freed. delete nvme_qpair. 00:19:27.303 [2024-12-05 12:22:57.999788] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x1f557a0:1 started. 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.303 12:22:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:19:27.303 [2024-12-05 12:22:58.004902] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x1f557a0 was disconnected and freed. delete nvme_qpair. 
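The @162-@168 checks above lean on small helpers from host/mdns_discovery.sh whose pipelines appear verbatim in the xtrace; a sketch reconstructing two of them (rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py, and the function wrappers around the traced pipelines are inferred):

    get_subsystem_paths() {
        # Portals a controller currently uses, as one sorted line ("4420 4421").
        # -s selects the host application's RPC socket, as in the trace.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    get_notification_count() {
        # Number of bdev notifications issued since $notify_id; advancing the
        # cursor is inferred from the trace: -i 0 -> 2 events, notify_id=2;
        # -i 2 -> 2 more, notify_id=4; -i 4 -> 0, notify_id stays 4.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

After the @172/@173 nvmf_subsystem_add_ns calls, the @176-@178 checks below accordingly see four bdevs (two namespaces per controller) and two new notifications.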
00:19:27.303 [2024-12-05 12:22:58.126075] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:27.303 [2024-12-05 12:22:58.126097] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:19:27.303 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:27.303 cookie is 0 00:19:27.303 is_local: 1 00:19:27.303 our_own: 0 00:19:27.303 wide_area: 0 00:19:27.303 multicast: 1 00:19:27.303 cached: 1 00:19:27.303 [2024-12-05 12:22:58.126108] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:27.563 [2024-12-05 12:22:58.226076] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:27.563 [2024-12-05 12:22:58.226098] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:19:27.563 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:27.563 cookie is 0 00:19:27.563 is_local: 1 00:19:27.563 our_own: 0 00:19:27.563 wide_area: 0 00:19:27.563 multicast: 1 00:19:27.563 cached: 1 00:19:27.563 [2024-12-05 12:22:58.226108] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:28.498 [2024-12-05 12:22:59.123759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:28.498 [2024-12-05 12:22:59.124750] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:28.498 [2024-12-05 12:22:59.124785] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:28.498 [2024-12-05 12:22:59.124826] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:19:28.498 [2024-12-05 12:22:59.124843] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:28.498 [2024-12-05 12:22:59.131744] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:19:28.498 [2024-12-05 12:22:59.132784] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:28.498 [2024-12-05 12:22:59.132865] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.498 12:22:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:19:28.498 [2024-12-05 12:22:59.262828] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:19:28.498 [2024-12-05 12:22:59.263144] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:19:28.498 [2024-12-05 12:22:59.321165] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:19:28.498 [2024-12-05 12:22:59.321217] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:19:28.498 [2024-12-05 12:22:59.321229] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:28.498 [2024-12-05 12:22:59.321234] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:28.498 [2024-12-05 12:22:59.321251] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:28.498 [2024-12-05 12:22:59.321444] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:19:28.498 [2024-12-05 12:22:59.321474] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:19:28.498 [2024-12-05 12:22:59.321483] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:19:28.498 [2024-12-05 12:22:59.321488] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:19:28.498 [2024-12-05 12:22:59.321504] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:19:28.756 [2024-12-05 12:22:59.366916] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:28.756 [2024-12-05 12:22:59.367066] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:28.756 [2024-12-05 12:22:59.367132] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:19:28.756 [2024-12-05 12:22:59.367143] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:19:29.321 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:19:29.321 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:29.321 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:29.321 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:29.321 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.321 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:29.321 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:29.321 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:19:29.579 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.580 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:29.580 [2024-12-05 12:23:00.444669] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:29.580 [2024-12-05 12:23:00.444707] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:29.580 [2024-12-05 12:23:00.444746] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:19:29.580 [2024-12-05 12:23:00.444763] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:19:29.842 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.842 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:19:29.842 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.842 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:29.842 [2024-12-05 12:23:00.451688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.842 [2024-12-05 12:23:00.451725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.842 [2024-12-05 12:23:00.451756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.842 [2024-12-05 12:23:00.451766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.842 [2024-12-05 12:23:00.451776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.842 [2024-12-05 12:23:00.451784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.842 [2024-12-05 12:23:00.451795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.842 [2024-12-05 12:23:00.451803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.842 [2024-12-05 12:23:00.451813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36330 is same with the state(6) to be set 00:19:29.842 [2024-12-05 12:23:00.452898] bdev_nvme.c:7466:discovery_aer_cb: 
*INFO*: Discovery[10.0.0.3:8009] got aer 00:19:29.842 [2024-12-05 12:23:00.452985] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:19:29.842 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.842 12:23:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:19:29.842 [2024-12-05 12:23:00.458770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.842 [2024-12-05 12:23:00.458800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.842 [2024-12-05 12:23:00.458830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.842 [2024-12-05 12:23:00.458840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.842 [2024-12-05 12:23:00.458850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.842 [2024-12-05 12:23:00.458858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.842 [2024-12-05 12:23:00.458868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:29.842 [2024-12-05 12:23:00.458877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:29.842 [2024-12-05 12:23:00.458902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42b40 is same with the state(6) to be set 00:19:29.842 [2024-12-05 12:23:00.461569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36330 (9): Bad file descriptor 00:19:29.842 [2024-12-05 12:23:00.468730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42b40 (9): Bad file descriptor 00:19:29.842 [2024-12-05 12:23:00.471656] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:29.842 [2024-12-05 12:23:00.471687] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:29.842 [2024-12-05 12:23:00.471696] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:29.842 [2024-12-05 12:23:00.471701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:29.842 [2024-12-05 12:23:00.471755] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
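From here the log settles into a reconnect retry loop. errno 111 is ECONNREFUSED on Linux: the @195/@196 RPCs just removed the 4420 listeners, so every bdev_nvme reconnect attempt against the now-stale 10.0.0.3:4420 and 10.0.0.4:4420 paths is refused, and each cycle ends in "Resetting controller failed" until discovery stops reporting 4420. The removals can be reproduced standalone with SPDK's rpc.py (paths assume an SPDK checkout; shown for illustration, not taken from this run):

    # Same effect as the traced "rpc_cmd nvmf_subsystem_remove_listener ..." calls.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420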
00:19:29.842 [2024-12-05 12:23:00.471854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.842 [2024-12-05 12:23:00.471912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f36330 with addr=10.0.0.3, port=4420 00:19:29.842 [2024-12-05 12:23:00.471924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36330 is same with the state(6) to be set 00:19:29.842 [2024-12-05 12:23:00.471944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36330 (9): Bad file descriptor 00:19:29.842 [2024-12-05 12:23:00.471961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:29.842 [2024-12-05 12:23:00.471971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:29.842 [2024-12-05 12:23:00.471983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:29.842 [2024-12-05 12:23:00.471999] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:29.842 [2024-12-05 12:23:00.472008] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:29.842 [2024-12-05 12:23:00.472013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:29.842 [2024-12-05 12:23:00.478736] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:19:29.842 [2024-12-05 12:23:00.478762] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:19:29.842 [2024-12-05 12:23:00.478769] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:19:29.842 [2024-12-05 12:23:00.478774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:19:29.842 [2024-12-05 12:23:00.478818] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:19:29.842 [2024-12-05 12:23:00.478907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.842 [2024-12-05 12:23:00.478930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42b40 with addr=10.0.0.4, port=4420 00:19:29.842 [2024-12-05 12:23:00.478942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42b40 is same with the state(6) to be set 00:19:29.842 [2024-12-05 12:23:00.478959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42b40 (9): Bad file descriptor 00:19:29.842 [2024-12-05 12:23:00.478974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:19:29.842 [2024-12-05 12:23:00.478984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:19:29.842 [2024-12-05 12:23:00.478993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:19:29.842 [2024-12-05 12:23:00.479002] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:19:29.842 [2024-12-05 12:23:00.479008] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:19:29.842 [2024-12-05 12:23:00.479013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:19:29.842 [2024-12-05 12:23:00.481764] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:29.842 [2024-12-05 12:23:00.481791] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:29.842 [2024-12-05 12:23:00.481798] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:29.842 [2024-12-05 12:23:00.481803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:29.842 [2024-12-05 12:23:00.481845] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:29.842 [2024-12-05 12:23:00.481943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.842 [2024-12-05 12:23:00.481965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f36330 with addr=10.0.0.3, port=4420 00:19:29.842 [2024-12-05 12:23:00.481977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36330 is same with the state(6) to be set 00:19:29.842 [2024-12-05 12:23:00.481994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36330 (9): Bad file descriptor 00:19:29.842 [2024-12-05 12:23:00.482029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:29.842 [2024-12-05 12:23:00.482042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:29.842 [2024-12-05 12:23:00.482052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:29.842 [2024-12-05 12:23:00.482061] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:29.842 [2024-12-05 12:23:00.482067] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:29.842 [2024-12-05 12:23:00.482072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:29.842 [2024-12-05 12:23:00.488827] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:19:29.843 [2024-12-05 12:23:00.488853] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:19:29.843 [2024-12-05 12:23:00.488861] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.488866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:19:29.843 [2024-12-05 12:23:00.488923] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:19:29.843 [2024-12-05 12:23:00.489008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.843 [2024-12-05 12:23:00.489030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42b40 with addr=10.0.0.4, port=4420 00:19:29.843 [2024-12-05 12:23:00.489041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42b40 is same with the state(6) to be set 00:19:29.843 [2024-12-05 12:23:00.489058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42b40 (9): Bad file descriptor 00:19:29.843 [2024-12-05 12:23:00.489092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:19:29.843 [2024-12-05 12:23:00.489105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:19:29.843 [2024-12-05 12:23:00.489114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:19:29.843 [2024-12-05 12:23:00.489123] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:19:29.843 [2024-12-05 12:23:00.489129] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:19:29.843 [2024-12-05 12:23:00.489134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:19:29.843 [2024-12-05 12:23:00.491853] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:29.843 [2024-12-05 12:23:00.491880] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:29.843 [2024-12-05 12:23:00.491887] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.491891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:29.843 [2024-12-05 12:23:00.491933] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.491984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.843 [2024-12-05 12:23:00.492005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f36330 with addr=10.0.0.3, port=4420 00:19:29.843 [2024-12-05 12:23:00.492017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36330 is same with the state(6) to be set 00:19:29.843 [2024-12-05 12:23:00.492032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36330 (9): Bad file descriptor 00:19:29.843 [2024-12-05 12:23:00.492047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:29.843 [2024-12-05 12:23:00.492056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:29.843 [2024-12-05 12:23:00.492065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:29.843 [2024-12-05 12:23:00.492073] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:19:29.843 [2024-12-05 12:23:00.492079] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:29.843 [2024-12-05 12:23:00.492084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:29.843 [2024-12-05 12:23:00.498934] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:19:29.843 [2024-12-05 12:23:00.498963] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:19:29.843 [2024-12-05 12:23:00.498970] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.498974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:19:29.843 [2024-12-05 12:23:00.499017] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.499071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.843 [2024-12-05 12:23:00.499092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42b40 with addr=10.0.0.4, port=4420 00:19:29.843 [2024-12-05 12:23:00.499104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42b40 is same with the state(6) to be set 00:19:29.843 [2024-12-05 12:23:00.499120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42b40 (9): Bad file descriptor 00:19:29.843 [2024-12-05 12:23:00.499221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:19:29.843 [2024-12-05 12:23:00.499237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:19:29.843 [2024-12-05 12:23:00.499259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:19:29.843 [2024-12-05 12:23:00.499270] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:19:29.843 [2024-12-05 12:23:00.499276] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:19:29.843 [2024-12-05 12:23:00.499297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:19:29.843 [2024-12-05 12:23:00.501943] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:29.843 [2024-12-05 12:23:00.501967] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:29.843 [2024-12-05 12:23:00.501974] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.501979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:29.843 [2024-12-05 12:23:00.502021] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:19:29.843 [2024-12-05 12:23:00.502073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.843 [2024-12-05 12:23:00.502095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f36330 with addr=10.0.0.3, port=4420 00:19:29.843 [2024-12-05 12:23:00.502107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36330 is same with the state(6) to be set 00:19:29.843 [2024-12-05 12:23:00.502123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36330 (9): Bad file descriptor 00:19:29.843 [2024-12-05 12:23:00.502157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:29.843 [2024-12-05 12:23:00.502169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:29.843 [2024-12-05 12:23:00.502178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:29.843 [2024-12-05 12:23:00.502202] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:29.843 [2024-12-05 12:23:00.502208] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:29.843 [2024-12-05 12:23:00.502229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:29.843 [2024-12-05 12:23:00.509027] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:19:29.843 [2024-12-05 12:23:00.509051] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:19:29.843 [2024-12-05 12:23:00.509058] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.509063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:19:29.843 [2024-12-05 12:23:00.509105] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.509157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.843 [2024-12-05 12:23:00.509179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42b40 with addr=10.0.0.4, port=4420 00:19:29.843 [2024-12-05 12:23:00.509190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42b40 is same with the state(6) to be set 00:19:29.843 [2024-12-05 12:23:00.509205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42b40 (9): Bad file descriptor 00:19:29.843 [2024-12-05 12:23:00.509237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:19:29.843 [2024-12-05 12:23:00.509249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:19:29.843 [2024-12-05 12:23:00.509258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:19:29.843 [2024-12-05 12:23:00.509266] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:19:29.843 [2024-12-05 12:23:00.509287] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:19:29.843 [2024-12-05 12:23:00.509293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:19:29.843 [2024-12-05 12:23:00.512030] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:29.843 [2024-12-05 12:23:00.512056] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:29.843 [2024-12-05 12:23:00.512063] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.512068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:29.843 [2024-12-05 12:23:00.512109] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:29.843 [2024-12-05 12:23:00.512159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:29.843 [2024-12-05 12:23:00.512181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f36330 with addr=10.0.0.3, port=4420 00:19:29.843 [2024-12-05 12:23:00.512192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36330 is same with the state(6) to be set 00:19:29.843 [2024-12-05 12:23:00.512240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36330 (9): Bad file descriptor 00:19:29.843 [2024-12-05 12:23:00.512276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:29.843 [2024-12-05 12:23:00.512289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:29.843 [2024-12-05 12:23:00.512298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:29.843 [2024-12-05 12:23:00.512307] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:29.843 [2024-12-05 12:23:00.512313] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:29.843 [2024-12-05 12:23:00.512318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:29.843 [2024-12-05 12:23:00.519113] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:19:29.843 [2024-12-05 12:23:00.519136] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:19:29.843 [2024-12-05 12:23:00.519143] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:19:29.844 [2024-12-05 12:23:00.519148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:19:29.844 [2024-12-05 12:23:00.519189] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
[... duplicate iterations elided: the identical delete-qpairs / disconnect / reconnect / connect()-refused (errno 111) sequence above repeats for nqn.2016-06.io.spdk:cnode20 (10.0.0.4:4420, tqpair 0x1f42b40) and nqn.2016-06.io.spdk:cnode0 (10.0.0.3:4420, tqpair 0x1f36330), alternating roughly every 10 ms from 12:23:00.519 through 12:23:00.580, with each pass ending in "Resetting controller failed."; only the timestamps change. The final pass begins below. ...]
00:19:29.846 [2024-12-05 12:23:00.582697] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:19:29.846 [2024-12-05 12:23:00.582737] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:19:29.846 [2024-12-05 12:23:00.582745] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:19:29.846 [2024-12-05 12:23:00.582750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:19:29.846 [2024-12-05 12:23:00.582791] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:19:29.846 [2024-12-05 12:23:00.582844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:29.846 [2024-12-05 12:23:00.582866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f36330 with addr=10.0.0.3, port=4420
00:19:29.846 [2024-12-05 12:23:00.582877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f36330 is same with the state(6) to be set
00:19:29.846 [2024-12-05 12:23:00.582933] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found
00:19:29.846 [2024-12-05 12:23:00.583002] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:19:29.846 [2024-12-05 12:23:00.583046] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:19:29.846 [2024-12-05 12:23:00.583085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f36330 (9): Bad file descriptor
00:19:29.846 [2024-12-05 12:23:00.583120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:19:29.846 [2024-12-05 12:23:00.583134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:19:29.846 [2024-12-05 12:23:00.583144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:19:29.846 [2024-12-05 12:23:00.583153] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:19:29.846 [2024-12-05 12:23:00.583160] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:19:29.846 [2024-12-05 12:23:00.583165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
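[Note: the roughly 10 ms cadence of the failed reconnect passes above is governed by bdev_nvme's reconnect settings. If slower retries or a bounded retry window were wanted, recent SPDK releases expose them through bdev_nvme_set_options; the flag names below are an assumption to verify against your tree with scripts/rpc.py bdev_nvme_set_options --help, and /var/tmp/spdk.sock is the default RPC socket rather than this test's /tmp/host.sock.]

  # Sketch: retry every 2 s, give up on the controller after 30 s,
  # and fail pending I/O after 10 s instead of holding it.
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_options \
      --reconnect-delay-sec 2 \
      --ctrlr-loss-timeout-sec 30 \
      --fast-io-fail-timeout-sec 10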
00:19:29.846 [2024-12-05 12:23:00.583837] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found
00:19:29.846 [2024-12-05 12:23:00.583884] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:19:29.846 [2024-12-05 12:23:00.583934] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:19:29.846 [2024-12-05 12:23:00.668901] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:19:29.846 [2024-12-05 12:23:00.669895] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:19:30.782 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]]
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:19:30.783 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:19:31.041 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]]
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]]
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.042 12:23:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1
00:19:31.042 [2024-12-05 12:23:01.826067] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]]
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:19:31.976 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:32.235 [2024-12-05 12:23:02.979071] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns
00:19:32.235 2024/12/05 12:23:02 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:19:32.235 request:
00:19:32.235 {
00:19:32.235 "method": "bdev_nvme_start_mdns_discovery",
00:19:32.235 "params": {
00:19:32.235 "name": "mdns",
00:19:32.235 "svcname": "_nvme-disc._http",
00:19:32.235 "hostnqn": "nqn.2021-12.io.spdk:test"
00:19:32.235 }
00:19:32.235 }
00:19:32.235 Got JSON-RPC error response
00:19:32.235 GoRPCClient: error on JSON-RPC call
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:32.235 12:23:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5
00:19:32.801 [2024-12-05 12:23:03.567656] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:19:32.802 [2024-12-05 12:23:03.667655] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:19:33.059 [2024-12-05 12:23:03.767658] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:19:33.059 [2024-12-05 12:23:03.767678] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:19:33.059 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:19:33.059 cookie is 0
00:19:33.059 is_local: 1
00:19:33.059 our_own: 0
00:19:33.059 wide_area: 0
00:19:33.059 multicast: 1
00:19:33.059 cached: 1
00:19:33.059 [2024-12-05 12:23:03.867658] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:19:33.059 [2024-12-05 12:23:03.867682] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:19:33.059 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:19:33.059 cookie is 0
00:19:33.059 is_local: 1
00:19:33.059 our_own: 0
00:19:33.059 wide_area: 0
00:19:33.059 multicast: 1
00:19:33.059 cached: 1
00:19:33.059 [2024-12-05 12:23:03.867723] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:19:33.317 [2024-12-05 12:23:03.967658] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:19:33.317 [2024-12-05 12:23:03.967680] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:19:33.317 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:19:33.317 cookie is 0
00:19:33.317 is_local: 1
00:19:33.317 our_own: 0
00:19:33.317 wide_area: 0
00:19:33.317 multicast: 1
00:19:33.317 cached: 1
00:19:33.317 [2024-12-05 12:23:04.067658] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:19:33.317 [2024-12-05 12:23:04.067681] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3)
00:19:33.317 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:19:33.317 cookie is 0
00:19:33.317 is_local: 1
00:19:33.317 our_own: 0
00:19:33.317 wide_area: 0
00:19:33.317 multicast: 1
00:19:33.317 cached: 1
00:19:33.317 [2024-12-05 12:23:04.067691] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:19:34.249 [2024-12-05 12:23:04.777334] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:19:34.249 [2024-12-05 12:23:04.777359] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:19:34.249 [2024-12-05 12:23:04.777379] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:19:34.249 [2024-12-05 12:23:04.863419] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0
00:19:34.249 [2024-12-05 12:23:04.921756] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421
00:19:34.249 [2024-12-05 12:23:04.922351] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x1f20e20:1 started.
00:19:34.249 [2024-12-05 12:23:04.924043] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:19:34.249 [2024-12-05 12:23:04.924071] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:19:34.249 [2024-12-05 12:23:04.926122] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x1f20e20 was disconnected and freed. delete nvme_qpair.
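[Note: the spdk0/spdk1 entries being resolved above are ordinary mDNS service records of type _nvme-disc._tcp. For reference, an equivalent record can be published by hand with avahi-utils; this is a sketch for reproducing the browse output, not how the test target itself advertises.]

  # Advertise a discovery endpoint the way the resolver sees spdk0 above
  # (service name, type, port, then TXT records).
  avahi-publish -s spdk0 _nvme-disc._tcp 8009 \
      "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" &

  # Browse it back in the same parsable format used by the test:
  avahi-browse -t -r _nvme-disc._tcp -p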
00:19:34.249 [2024-12-05 12:23:04.977202] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:19:34.249 [2024-12-05 12:23:04.977225] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:19:34.249 [2024-12-05 12:23:04.977259] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:19:34.249 [2024-12-05 12:23:05.065335] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0
00:19:34.506 [2024-12-05 12:23:05.130622] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421
00:19:34.506 [2024-12-05 12:23:05.131091] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1f32dd0:1 started.
00:19:34.506 [2024-12-05 12:23:05.132417] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done
00:19:34.506 [2024-12-05 12:23:05.132443] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:19:34.506 [2024-12-05 12:23:05.136110] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1f32dd0 was disconnected and freed. delete nvme_qpair.
00:19:37.787 12:23:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs
00:19:37.787 12:23:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:19:37.787 12:23:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.787 12:23:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:37.787 12:23:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:19:37.787 12:23:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:19:37.787 12:23:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:37.787 [2024-12-05 12:23:08.181511] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp
00:19:37.787 2024/12/05 12:23:08 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists
00:19:37.787 request:
00:19:37.787 {
00:19:37.787 "method": "bdev_nvme_start_mdns_discovery",
00:19:37.787 "params": {
00:19:37.787 "name": "cdc",
00:19:37.787 "svcname": "_nvme-disc._tcp",
00:19:37.787 "hostnqn": "nqn.2021-12.io.spdk:test"
00:19:37.787 }
00:19:37.787 }
00:19:37.787 Got JSON-RPC error response
00:19:37.787 GoRPCClient: error on JSON-RPC call
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
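[Note: the local es=0 / es=1 bookkeeping above is autotest_common.sh's NOT helper asserting that the RPC fails. A simplified standalone sketch of the same idea follows; the real helper also routes the command through valid_exec_arg and distinguishes signal exits.]

  # Succeed only if the wrapped command fails.
  NOT() {
    local es=0
    "$@" || es=$?            # run the command, capture its exit status
    (( es != 0 ))            # return success iff the command failed
  }

  # Usage mirroring the trace: a second discovery service for the same
  # _nvme-disc._tcp service name must be rejected with "File exists".
  NOT scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
      -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test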
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:19:37.787 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:19:37.787 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:19:37.788 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:19:37.788 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:19:37.788 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:19:37.788 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:19:37.788 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:19:37.788 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:37.788 12:23:08 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1
00:19:37.788 [2024-12-05 12:23:08.367667] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp
00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found'
00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3
00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:19:38.724 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:38.724 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:19:38.724 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 94455 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 94455 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 94471 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.725 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:19:38.725 Got SIGTERM, quitting. 00:19:38.725 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:19:38.725 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:19:38.725 avahi-daemon 0.8 exiting. 
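The found / 'not found' assertions in this trace come from the harness's check_mdns_request_exists helper, which snapshots avahi-browse output and substring-matches every record line against the process name, IP and port (the @92 through @108 steps above). A minimal bash sketch of that logic, reconstructed from the traced commands rather than copied from the script, so the control flow here is an approximation:

    check_mdns_request_exists() {
        local process=$1 ip=$2 port=$3 check_type=$4
        local output lines line
        # -t: terminate once the cache is dumped, -r: resolve records, -p: parseable output
        output=$(avahi-browse -t -r _nvme-disc._tcp -p)
        readarray -t lines <<< "$output"
        for line in "${lines[@]}"; do
            # a record carrying the process name, IP and port counts as a match
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                [[ $check_type == found ]] && return 0   # expected present: pass
                return 1                                  # expected absent: fail
            fi
        done
        [[ $check_type == found ]] && return 1
        return 0
    }

In the run above, the spdk1 record for 10.0.0.3:8009 was present before nvmf_subsystem_remove_listener and the 'not found' re-check a second later confirmed it had been withdrawn, after which the harness stopped the mDNS PRR and killed the avahi-daemon it had started, which is where the SIGTERM and "avahi-daemon 0.8 exiting" messages come from.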
00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.984 rmmod nvme_tcp 00:19:38.984 rmmod nvme_fabrics 00:19:38.984 rmmod nvme_keyring 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 94417 ']' 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 94417 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 94417 ']' 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 94417 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94417 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.984 killing process with pid 94417 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94417' 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 94417 00:19:38.984 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 94417 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:39.243 12:23:09 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.243 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:39.243 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:39.243 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:39.243 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:39.243 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:39.243 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:39.243 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:39.243 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:19:39.502 00:19:39.502 real 0m21.798s 00:19:39.502 user 0m42.271s 00:19:39.502 sys 0m2.286s 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.502 ************************************ 00:19:39.502 END TEST nvmf_mdns_discovery 00:19:39.502 ************************************ 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.502 ************************************ 00:19:39.502 START TEST nvmf_host_multipath 00:19:39.502 ************************************ 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:39.502 * Looking for test storage... 
00:19:39.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:19:39.502 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:39.762 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:39.762 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.762 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.762 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.762 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:39.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.763 --rc genhtml_branch_coverage=1 00:19:39.763 --rc genhtml_function_coverage=1 00:19:39.763 --rc genhtml_legend=1 00:19:39.763 --rc geninfo_all_blocks=1 00:19:39.763 --rc geninfo_unexecuted_blocks=1 00:19:39.763 00:19:39.763 ' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:39.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.763 --rc genhtml_branch_coverage=1 00:19:39.763 --rc genhtml_function_coverage=1 00:19:39.763 --rc genhtml_legend=1 00:19:39.763 --rc geninfo_all_blocks=1 00:19:39.763 --rc geninfo_unexecuted_blocks=1 00:19:39.763 00:19:39.763 ' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:39.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.763 --rc genhtml_branch_coverage=1 00:19:39.763 --rc genhtml_function_coverage=1 00:19:39.763 --rc genhtml_legend=1 00:19:39.763 --rc geninfo_all_blocks=1 00:19:39.763 --rc geninfo_unexecuted_blocks=1 00:19:39.763 00:19:39.763 ' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:39.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.763 --rc genhtml_branch_coverage=1 00:19:39.763 --rc genhtml_function_coverage=1 00:19:39.763 --rc genhtml_legend=1 00:19:39.763 --rc geninfo_all_blocks=1 00:19:39.763 --rc geninfo_unexecuted_blocks=1 00:19:39.763 00:19:39.763 ' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.763 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.763 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:39.764 Cannot find device "nvmf_init_br" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:39.764 Cannot find device "nvmf_init_br2" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:39.764 Cannot find device "nvmf_tgt_br" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.764 Cannot find device "nvmf_tgt_br2" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:39.764 Cannot find device "nvmf_init_br" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:39.764 Cannot find device "nvmf_init_br2" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:39.764 Cannot find device "nvmf_tgt_br" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:39.764 Cannot find device "nvmf_tgt_br2" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:39.764 Cannot find device "nvmf_br" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:39.764 Cannot find device "nvmf_init_if" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:39.764 Cannot find device "nvmf_init_if2" 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:39.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:39.764 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
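The string of "Cannot find device" / "Cannot open network namespace" messages above is expected: nvmf_veth_init first tears down any leftover topology, ignoring errors, before building a fresh one. The remaining bridge-port enslavements (@212 through @214) continue in the trace just below; consolidated, the topology being built is the following (device names and addresses exactly as traced, the loop is just a tidier equivalent of the unrolled commands):

    ip netns add nvmf_tgt_ns_spdk
    # two initiator and two target veth pairs; the *_br ends become bridge ports
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target, 10.0.0.3
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target, 10.0.0.4
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" up
        ip link set "$port" master nvmf_br    # all four peer ends share one bridge
    done

The iptables ACCEPT rules and the four ping checks that follow then validate port 4420 ingress and both directions across the bridge before anything NVMe-related starts.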
00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.023 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:40.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:19:40.024 00:19:40.024 --- 10.0.0.3 ping statistics --- 00:19:40.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.024 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:40.024 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:40.024 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:19:40.024 00:19:40.024 --- 10.0.0.4 ping statistics --- 00:19:40.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.024 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:40.024 00:19:40.024 --- 10.0.0.1 ping statistics --- 00:19:40.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.024 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:40.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:40.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:40.024 00:19:40.024 --- 10.0.0.2 ping statistics --- 00:19:40.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.024 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=95111 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 95111 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95111 ']' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.024 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:40.282 [2024-12-05 12:23:10.904926] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
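(The DPDK EAL parameter line belonging to this startup banner follows immediately below.) With connectivity proven, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers; the -m 0x3 core mask is why two reactors later report in on cores 0 and 1. Reduced to its essentials, under the assumption that a cheap RPC such as rpc_get_methods is a fair readiness probe (the polling loop is a simplification of the harness's waitforlisten, not its literal body):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket until the target can service requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done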
00:19:40.282 [2024-12-05 12:23:10.905028] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.282 [2024-12-05 12:23:11.059856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:40.282 [2024-12-05 12:23:11.115652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.282 [2024-12-05 12:23:11.115732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.282 [2024-12-05 12:23:11.115747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.282 [2024-12-05 12:23:11.115758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.282 [2024-12-05 12:23:11.115767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.282 [2024-12-05 12:23:11.117158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.282 [2024-12-05 12:23:11.117174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.541 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.541 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:40.541 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.541 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.541 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:40.541 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.541 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95111 00:19:40.541 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:40.800 [2024-12-05 12:23:11.592228] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.800 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:41.070 Malloc0 00:19:41.365 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:41.365 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.634 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:41.894 [2024-12-05 12:23:12.728871] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:41.894 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:19:42.461 [2024-12-05 12:23:13.025074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95201 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95201 /var/tmp/bdevperf.sock 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95201 ']' 00:19:42.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.461 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:42.719 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.719 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:42.719 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:42.977 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:43.235 Nvme0n1 00:19:43.235 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:43.801 Nvme0n1 00:19:43.801 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:43.801 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:44.736 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:44.736 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:44.994 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
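The two bdev_nvme_attach_controller calls above are the core of the multipath setup: both use the same bdev name (-b Nvme0) and the same subsystem NQN but different listener ports, so the second call registers an additional path on the existing controller rather than creating a new bdev, and each RPC accordingly reports the same Nvme0n1. Pulled out of the trace (flags exactly as issued; the inline comments give the conventional reading of -r, -l and -o and are inferred, not quoted from the script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_set_options -r -1    # -1: keep retrying failed I/O indefinitely
    # path 1: the listener on port 4420
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # path 2: same controller name and NQN, the listener on port 4421
    $rpc -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

set_ANA_state then flips each listener's asymmetric-namespace-access state (non_optimized on 4420, optimized on 4421 in this first pass), and the confirm_io_on_port run that follows uses the bpftrace nvmf_path.bt probes to verify that I/O is actually flowing through the port whose ANA state says it should.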
00:19:45.252 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:45.252 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95111 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:45.252 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95275 00:19:45.252 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:51.813 12:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:51.813 12:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:51.813 Attaching 4 probes... 00:19:51.813 @path[10.0.0.3, 4421]: 18916 00:19:51.813 @path[10.0.0.3, 4421]: 19789 00:19:51.813 @path[10.0.0.3, 4421]: 19426 00:19:51.813 @path[10.0.0.3, 4421]: 19660 00:19:51.813 @path[10.0.0.3, 4421]: 19707 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95275 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:51.813 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:52.072 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:52.072 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95111 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:52.072 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95412 00:19:52.072 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:58.631 12:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:58.631 12:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:58.631 Attaching 4 probes... 00:19:58.631 @path[10.0.0.3, 4420]: 19568 00:19:58.631 @path[10.0.0.3, 4420]: 20130 00:19:58.631 @path[10.0.0.3, 4420]: 20084 00:19:58.631 @path[10.0.0.3, 4420]: 20704 00:19:58.631 @path[10.0.0.3, 4420]: 21122 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95412 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95541 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95111 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:58.631 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:05.191 Attaching 4 probes... 
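Each verification block in this run follows the confirm_io_on_port pattern traced at host/multipath.sh@64-@73: attach the bpftrace counters to the target process (pid 95111 here), give bdevperf six seconds of I/O, then require that the listener reporting the expected ANA state and the path that actually carried I/O name the same port. A bash sketch of one cycle, assuming the trace.txt redirection and the $rootdir/$nvmfpid variable names (the individual commands are the ones in the trace; their wiring into a function is inferred):

  confirm_io_on_port() {
      local expected_state=$1 expected_port=$2
      # @64/@65: attach the nvmf_path.bt probes to the target, remember the probe pid
      "$rootdir/scripts/bpftrace.sh" "$nvmfpid" "$rootdir/scripts/bpf/nvmf_path.bt" > trace.txt &
      dtrace_pid=$!
      sleep 6   # @66: let bdevperf generate I/O on whichever path is usable
      # @67: which portal does the target say has the expected ANA state?
      active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
          | jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
      # @68/@69: which portal actually saw I/O according to the bpftrace counters?
      port=$(cat trace.txt | awk '$1=="@path[10.0.0.3," {print $2}' | cut -d ']' -f1 | sed -n 1p)
      # @70/@71: both views must agree with the expectation
      [[ $port == "$expected_port" ]] && [[ $port == "$active_port" ]]
      local rc=$?
      kill "$dtrace_pid" && rm -f trace.txt   # @72/@73: stop the probes, drop the counters
      return $rc
  }

The @path lines printed after each 'Attaching 4 probes...' banner are the counter dumps this parse consumes.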
00:20:05.191 @path[10.0.0.3, 4421]: 13609 00:20:05.191 @path[10.0.0.3, 4421]: 19811 00:20:05.191 @path[10.0.0.3, 4421]: 20396 00:20:05.191 @path[10.0.0.3, 4421]: 19919 00:20:05.191 @path[10.0.0.3, 4421]: 19936 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95541 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:05.191 12:23:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:05.191 12:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:05.448 12:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:05.449 12:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95111 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:05.449 12:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95673 00:20:05.449 12:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:12.035 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:12.035 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:12.035 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:12.035 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:12.035 Attaching 4 probes... 
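This cycle is the degenerate case: with both listeners set inaccessible, confirm_io_on_port is invoked as confirm_io_on_port '' '' (@94 above), the jq selector at @67 matches no listener, and the six-second probe window below records no @path samples at all. A self-contained illustration of why the selector comes back empty (the JSON document here is invented for the example and trimmed to the two fields the filter reads):

  listeners='[
    {"address": {"trsvcid": "4420"}, "ana_states": [{"ana_state": "inaccessible"}]},
    {"address": {"trsvcid": "4421"}, "ana_states": [{"ana_state": "inaccessible"}]}
  ]'
  # same selector shape as host/multipath.sh@67, with the expected state ""
  echo "$listeners" | jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'
  # prints nothing, so active_port='' and the @70/@71 checks reduce to [[ '' == '' ]]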
00:20:12.035 00:20:12.035 00:20:12.036 00:20:12.036 00:20:12.036 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95673 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:12.036 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:12.294 12:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:12.294 12:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:12.294 12:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95803 00:20:12.294 12:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:12.294 12:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95111 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:18.848 Attaching 4 probes... 
00:20:18.848 @path[10.0.0.3, 4421]: 19861 00:20:18.848 @path[10.0.0.3, 4421]: 19867 00:20:18.848 @path[10.0.0.3, 4421]: 19003 00:20:18.848 @path[10.0.0.3, 4421]: 19989 00:20:18.848 @path[10.0.0.3, 4421]: 20446 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95803 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:18.848 12:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:18.848 [2024-12-05 12:23:49.657559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2296e90 is same with the state(6) to be set
00:20:18.848 [... the tcp.c:1790 'recv state of tqpair=0x2296e90 is same with the state(6) to be set' message repeats identically with timestamps 12:23:49.657634 through 12:23:49.658270 during the listener removal; duplicate lines omitted ...]
00:20:18.849 12:23:49
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:20.222 12:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:20.222 12:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95939 00:20:20.222 12:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:20.222 12:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95111 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:26.789 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:26.789 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:26.789 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:26.789 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.790 Attaching 4 probes... 00:20:26.790 @path[10.0.0.3, 4420]: 20290 00:20:26.790 @path[10.0.0.3, 4420]: 20329 00:20:26.790 @path[10.0.0.3, 4420]: 19975 00:20:26.790 @path[10.0.0.3, 4420]: 20444 00:20:26.790 @path[10.0.0.3, 4420]: 20461 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95939 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.790 12:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:26.790 [2024-12-05 12:23:57.257411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:26.790 12:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:26.790 12:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:33.347 12:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:33.347 12:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96131 00:20:33.347 12:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:33.347 12:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95111 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:39.976 Attaching 4 probes... 00:20:39.976 @path[10.0.0.3, 4421]: 20410 00:20:39.976 @path[10.0.0.3, 4421]: 20765 00:20:39.976 @path[10.0.0.3, 4421]: 20866 00:20:39.976 @path[10.0.0.3, 4421]: 20668 00:20:39.976 @path[10.0.0.3, 4421]: 20892 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96131 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95201 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95201 ']' 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95201 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95201 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:39.976 killing process with pid 95201 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95201' 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95201 00:20:39.976 12:24:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95201 00:20:39.976 { 00:20:39.976 "results": [ 00:20:39.976 { 00:20:39.976 "job": "Nvme0n1", 00:20:39.976 "core_mask": "0x4", 00:20:39.976 "workload": "verify", 00:20:39.976 "status": "terminated", 00:20:39.976 "verify_range": { 00:20:39.976 "start": 0, 00:20:39.976 "length": 16384 00:20:39.976 }, 00:20:39.976 "queue_depth": 128, 00:20:39.976 "io_size": 4096, 00:20:39.976 "runtime": 
55.299629, 00:20:39.976 "iops": 8591.269210865772, 00:20:39.976 "mibps": 33.55964535494442, 00:20:39.976 "io_failed": 0, 00:20:39.976 "io_timeout": 0, 00:20:39.977 "avg_latency_us": 14873.388428104372, 00:20:39.977 "min_latency_us": 117.29454545454546, 00:20:39.977 "max_latency_us": 7015926.69090909 00:20:39.977 } 00:20:39.977 ], 00:20:39.977 "core_count": 1 00:20:39.977 } 00:20:39.977 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95201 00:20:39.977 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:39.977 [2024-12-05 12:23:13.110684] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:20:39.977 [2024-12-05 12:23:13.110806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95201 ] 00:20:39.977 [2024-12-05 12:23:13.261970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.977 [2024-12-05 12:23:13.315130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.977 Running I/O for 90 seconds... 00:20:39.977 10298.00 IOPS, 40.23 MiB/s [2024-12-05T12:24:10.846Z] 10193.50 IOPS, 39.82 MiB/s [2024-12-05T12:24:10.846Z] 10025.33 IOPS, 39.16 MiB/s [2024-12-05T12:24:10.846Z] 9998.25 IOPS, 39.06 MiB/s [2024-12-05T12:24:10.846Z] 9935.60 IOPS, 38.81 MiB/s [2024-12-05T12:24:10.846Z] 9924.67 IOPS, 38.77 MiB/s [2024-12-05T12:24:10.846Z] 9915.29 IOPS, 38.73 MiB/s [2024-12-05T12:24:10.846Z] 9902.12 IOPS, 38.68 MiB/s [2024-12-05T12:24:10.846Z] [2024-12-05 12:23:22.707141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-12-05 12:23:22.707212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:39.977 [2024-12-05 12:23:22.707282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-12-05 12:23:22.707332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:39.977 [2024-12-05 12:23:22.707356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-12-05 12:23:22.707370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:39.977 [2024-12-05 12:23:22.707391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-12-05 12:23:22.707406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:39.977 [2024-12-05 12:23:22.707426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-12-05 12:23:22.707440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:39.977 
[... nvme_qpair.c print_command / print_completion pairs from 12:23:22.707460 through 12:23:22.713319 omitted: the remaining queued WRITEs (lba 121744-121896) and READs (lba 120880-121416) on qid:1 all complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), consistent with the 4421 path having just been set inaccessible ...]
00:20:39.979 [2024-12-05 12:23:22.713336] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:39.979 
[2024-12-05 12:23:22.713653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.979 [2024-12-05 12:23:22.713864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:39.979 [2024-12-05 12:23:22.713882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.713895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.713913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.713925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.713943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.713955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.713973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.713986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:22.714448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.980 [2024-12-05 12:23:22.714464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:39.980 9858.67 IOPS, 38.51 MiB/s [2024-12-05T12:24:10.849Z] 9884.60 IOPS, 38.61 MiB/s [2024-12-05T12:24:10.849Z] 9901.64 IOPS, 38.68 MiB/s [2024-12-05T12:24:10.849Z] 9924.42 IOPS, 38.77 MiB/s [2024-12-05T12:24:10.849Z] 9959.69 IOPS, 38.91 MiB/s [2024-12-05T12:24:10.849Z] 9995.14 IOPS, 39.04 MiB/s [2024-12-05T12:24:10.849Z] [2024-12-05 12:23:29.223516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.223565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.223808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.223832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.223855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.223870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.223909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 
[2024-12-05 12:23:29.223924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.223941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.223954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.223971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.223984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125480 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.980 [2024-12-05 12:23:29.224436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:39.980 [2024-12-05 12:23:29.224454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.224827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.224840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:20:39.981 [2024-12-05 12:23:29.226758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.226783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.226810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.226824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.226847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.226859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.226882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.226896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.226918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.226931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.226953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.226966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.226988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.981 [2024-12-05 12:23:29.227336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.981 [2024-12-05 12:23:29.227373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.981 [2024-12-05 12:23:29.227410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.981 [2024-12-05 12:23:29.227446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.981 [2024-12-05 12:23:29.227484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:39.981 [2024-12-05 12:23:29.227516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 
12:23:29.227530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:29.227896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125368 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:29.227909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:39.982 9781.93 IOPS, 38.21 MiB/s [2024-12-05T12:24:10.851Z] 9371.31 IOPS, 36.61 MiB/s [2024-12-05T12:24:10.851Z] 9404.35 IOPS, 36.74 MiB/s [2024-12-05T12:24:10.851Z] 9444.28 IOPS, 36.89 MiB/s [2024-12-05T12:24:10.851Z] 9463.11 IOPS, 36.97 MiB/s [2024-12-05T12:24:10.851Z] 9492.05 IOPS, 37.08 MiB/s [2024-12-05T12:24:10.851Z] 9496.00 IOPS, 37.09 MiB/s [2024-12-05T12:24:10.851Z] [2024-12-05 12:23:36.292690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.982 [2024-12-05 12:23:36.292767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.292807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.292825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.292844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.292860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.292877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.292889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.292906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.292919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.292935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.292948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.292965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.292977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.292994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:20:39.982 [2024-12-05 12:23:36.293626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:39.982 [2024-12-05 12:23:36.293878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.982 [2024-12-05 12:23:36.293891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:39.983 [2024-12-05 12:23:36.293971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.983 [2024-12-05 12:23:36.293990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:20:39.983 [2024-12-05 12:23:36.294012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:39.983 [2024-12-05 12:23:36.294025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:20:39.983 [... 40 further WRITE command/completion pairs, all ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, lba 78432 through 78744 in steps of 8, sqhd 000e through 0035, elided ...]
00:20:39.984 [2024-12-05 12:23:36.297088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:39.984 [2024-12-05 12:23:36.297101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:20:39.984 9381.05 IOPS, 36.64 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 8973.17 IOPS, 35.05 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 8599.29 IOPS, 33.59 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 8255.32 IOPS, 32.25 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7937.81 IOPS, 31.01 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7643.81 IOPS, 29.86 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7370.82 IOPS, 28.79 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7204.34 IOPS, 28.14 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7298.07 IOPS, 28.51 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7384.65 IOPS, 28.85 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7449.34 IOPS, 29.10 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7526.06 IOPS, 29.40 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7604.85 IOPS, 29.71 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 7669.20 IOPS, 29.96 MiB/s [2024-12-05T12:24:10.853Z]
00:20:39.984 [2024-12-05 12:23:49.658673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:39.984 [2024-12-05 12:23:49.658731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:20:39.984 [... 7 further WRITE command/completion pairs, all ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, lba 57752 through 57800 in steps of 8, sqhd 0015 through 001b, elided ...]
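The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions above, together with the falling IOPS samples, are what the host sees while the multipath test holds one path's ANA state at inaccessible: writes queued to that path complete with the ANA status until I/O fails over. A minimal sketch of driving that state change through SPDK's rpc.py follows; the listener address, port, and flag spellings are assumptions from memory rather than values taken from this log, and may differ across SPDK versions.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Take the first path out of service; in-flight writes on it complete
    # with ASYMMETRIC ACCESS INACCESSIBLE and the host fails over.
    # (traddr/trsvcid below are illustrative placeholders, not from this log.)
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    # Restore the path once the failover has been verified.
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized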
00:20:39.984 [2024-12-05 12:23:49.659185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.984 [2024-12-05 12:23:49.659206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.984 [... 47 further READ command/completion pairs, all ABORTED - SQ DELETION (00/08) on qid:1, lba 57048 through 57416 in steps of 8, elided ...]
00:20:39.985 [2024-12-05 12:23:49.660498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:39.985 [2024-12-05 12:23:49.660509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.985 [... 31 further WRITE command/completion pairs, all ABORTED - SQ DELETION (00/08) on qid:1, lba 57816 through 58056 in steps of 8, elided ...]
00:20:39.986 [2024-12-05 12:23:49.661370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.986 [2024-12-05 12:23:49.661382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.987 [... 39 further READ command/completion pairs, all ABORTED - SQ DELETION (00/08) on qid:1, lba 57432 through 57736 in steps of 8, elided ...]
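The ABORTED - SQ DELETION (00/08) completions above are not media errors: they are the in-flight reads and writes being failed back to the initiator as the submission queues are torn down, so bdev_nvme can reissue them once another path is usable. Below is a hedged sketch of watching the subsequent reset from outside via rpc.py; bdev_nvme_get_controllers is a real SPDK RPC, but the controller name Nvme0 is an assumption inferred from the Nvme0n1 job name, not something this log states.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the controller list once a second while the failover is in
    # flight; the JSON output shows which transport ID the bdev currently
    # has attached and whether a reset/reconnect is pending.
    for _ in $(seq 1 10); do
        "$rpc_py" bdev_nvme_get_controllers -n Nvme0
        sleep 1
    done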
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:39.987 [2024-12-05 12:23:49.663737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:39.987 [2024-12-05 12:23:49.663830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.987 [2024-12-05 12:23:49.663851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.987 [2024-12-05 12:23:49.663880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1c1a0 (9): Bad file descriptor
00:20:39.987 [2024-12-05 12:23:49.664006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.987 [2024-12-05 12:23:49.664032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a1c1a0 with addr=10.0.0.3, port=4421
00:20:39.987 [2024-12-05 12:23:49.664046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1c1a0 is same with the state(6) to be set
00:20:39.987 [2024-12-05 12:23:49.664067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a1c1a0 (9): Bad file descriptor
00:20:39.987 [2024-12-05 12:23:49.664087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:39.987 [2024-12-05 12:23:49.664101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:39.987 [2024-12-05 12:23:49.664113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:39.987 [2024-12-05 12:23:49.664125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:39.987 [2024-12-05 12:23:49.664142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:39.987 7735.56 IOPS, 30.22 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 7790.49 IOPS, 30.43 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 7856.61 IOPS, 30.69 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 7916.10 IOPS, 30.92 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 7969.48 IOPS, 31.13 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8026.44 IOPS, 31.35 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8077.79 IOPS, 31.55 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8117.95 IOPS, 31.71 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8155.02 IOPS, 31.86 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8193.31 IOPS, 32.01 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 [2024-12-05 12:23:59.740575] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
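The MiB/s column in these progress samples is just the IOPS figure scaled by the 4096-byte I/O size (len:8 sectors of 512 bytes, matching the "IO size: 4096" in the summary below): MiB/s = IOPS x 4096 / 2^20, i.e. IOPS / 256. A quick check against the last sample above and the final average:

    # 8193.31 / 256 = 32.01 and 8591.27 / 256 = 33.56, matching the log.
    awk 'BEGIN {
        for (i = 1; i < ARGC; i++)
            printf "%s IOPS -> %.2f MiB/s\n", ARGV[i], ARGV[i] * 4096 / 1048576
    }' 8193.31 8591.27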
00:20:39.987 8240.28 IOPS, 32.19 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8289.70 IOPS, 32.38 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8333.62 IOPS, 32.55 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8374.84 IOPS, 32.71 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8405.88 IOPS, 32.84 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8444.00 IOPS, 32.98 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8483.13 IOPS, 33.14 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8519.43 IOPS, 33.28 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8553.52 IOPS, 33.41 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 8586.49 IOPS, 33.54 MiB/s [2024-12-05T12:24:10.856Z]
00:20:39.987 Received shutdown signal, test time was about 55.300269 seconds
00:20:39.987
00:20:39.987 Latency(us)
00:20:39.987 [2024-12-05T12:24:10.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:39.987 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:39.987 Verification LBA range: start 0x0 length 0x4000
00:20:39.987 Nvme0n1 : 55.30 8591.27 33.56 0.00 0.00 14873.39 117.29 7015926.69
00:20:39.987 [2024-12-05T12:24:10.856Z] ===================================================================================================================
00:20:39.987 [2024-12-05T12:24:10.856Z] Total : 8591.27 33.56 0.00 0.00 14873.39 117.29 7015926.69
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:39.987 rmmod nvme_tcp
00:20:39.987 rmmod nvme_fabrics
00:20:39.987 rmmod nvme_keyring
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 95111 ']'
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 95111
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95111 ']'
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95111
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95111
00:20:39.987 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.988 killing process with pid 95111 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95111' 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95111 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95111 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:39.988 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:20:40.247 00:20:40.247 real 1m0.696s 00:20:40.247 user 2m50.659s 00:20:40.247 sys 0m14.306s 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:40.247 ************************************ 00:20:40.247 END TEST nvmf_host_multipath 00:20:40.247 ************************************ 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.247 ************************************ 00:20:40.247 START TEST nvmf_timeout 00:20:40.247 ************************************ 00:20:40.247 12:24:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:40.247 * Looking for test storage... 00:20:40.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:40.247 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:40.247 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:20:40.247 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:40.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.507 --rc genhtml_branch_coverage=1 00:20:40.507 --rc genhtml_function_coverage=1 00:20:40.507 --rc genhtml_legend=1 00:20:40.507 --rc geninfo_all_blocks=1 00:20:40.507 --rc geninfo_unexecuted_blocks=1 00:20:40.507 00:20:40.507 ' 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:40.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.507 --rc genhtml_branch_coverage=1 00:20:40.507 --rc genhtml_function_coverage=1 00:20:40.507 --rc genhtml_legend=1 00:20:40.507 --rc geninfo_all_blocks=1 00:20:40.507 --rc geninfo_unexecuted_blocks=1 00:20:40.507 00:20:40.507 ' 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:40.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.507 --rc genhtml_branch_coverage=1 00:20:40.507 --rc genhtml_function_coverage=1 00:20:40.507 --rc genhtml_legend=1 00:20:40.507 --rc geninfo_all_blocks=1 00:20:40.507 --rc geninfo_unexecuted_blocks=1 00:20:40.507 00:20:40.507 ' 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:40.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.507 --rc genhtml_branch_coverage=1 00:20:40.507 --rc genhtml_function_coverage=1 00:20:40.507 --rc genhtml_legend=1 00:20:40.507 --rc geninfo_all_blocks=1 00:20:40.507 --rc geninfo_unexecuted_blocks=1 00:20:40.507 00:20:40.507 ' 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.507 
12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.507 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.508 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.508 12:24:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:40.508 Cannot find device "nvmf_init_br" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:40.508 Cannot find device "nvmf_init_br2" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:20:40.508 Cannot find device "nvmf_tgt_br" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.508 Cannot find device "nvmf_tgt_br2" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:40.508 Cannot find device "nvmf_init_br" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:40.508 Cannot find device "nvmf_init_br2" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:40.508 Cannot find device "nvmf_tgt_br" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:40.508 Cannot find device "nvmf_tgt_br2" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:40.508 Cannot find device "nvmf_br" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:40.508 Cannot find device "nvmf_init_if" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:40.508 Cannot find device "nvmf_init_if2" 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.508 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
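Everything nvmf_veth_init has done up to this point reduces to a small recipe: create the nvmf_tgt_ns_spdk namespace, build veth pairs whose far ends act as bridge ports, move the target-side interfaces (10.0.0.3, 10.0.0.4) into the namespace while the initiator side keeps 10.0.0.1 and 10.0.0.2, enslave all the ports to the nvmf_br bridge, and add iptables ACCEPT rules for the NVMe/TCP listener port. A condensed single-pair sketch of the same topology, using the names and addresses traced above and omitting the harness's second pair and error handling:

  # One initiator/target pair bridged through nvmf_br (sketch of the trace above)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks that follow are the harness verifying exactly this wiring before it starts the target.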
00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:40.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:20:40.768 00:20:40.768 --- 10.0.0.3 ping statistics --- 00:20:40.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.768 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:40.768 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:40.768 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:20:40.768 00:20:40.768 --- 10.0.0.4 ping statistics --- 00:20:40.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.768 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:40.768 00:20:40.768 --- 10.0.0.1 ping statistics --- 00:20:40.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.768 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:40.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:20:40.768 00:20:40.768 --- 10.0.0.2 ping statistics --- 00:20:40.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.768 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=96507 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 96507 00:20:40.768 12:24:11 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96507 ']' 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.768 12:24:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:41.027 [2024-12-05 12:24:11.661329] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:20:41.027 [2024-12-05 12:24:11.661445] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.027 [2024-12-05 12:24:11.820097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:41.027 [2024-12-05 12:24:11.875134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.027 [2024-12-05 12:24:11.875203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.027 [2024-12-05 12:24:11.875218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.027 [2024-12-05 12:24:11.875229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.027 [2024-12-05 12:24:11.875238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
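The target itself is started inside the namespace ("ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3" in the trace above), and waitforlisten blocks until the app's RPC socket answers before any configuration is attempted. A rough equivalent of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket and using spdk_get_version as the liveness probe (waitforlisten in autotest_common.sh is more careful than this loop):

  bin=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk "$bin" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready to accept configuration.
  until "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.2
  done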
00:20:41.027 [2024-12-05 12:24:11.876538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.027 [2024-12-05 12:24:11.876565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.963 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.963 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:41.963 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.963 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.963 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:41.963 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.963 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:41.963 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:42.222 [2024-12-05 12:24:12.924670] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.222 12:24:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:42.481 Malloc0 00:20:42.481 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.740 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.999 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:43.257 [2024-12-05 12:24:13.954243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96604 00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96604 /var/tmp/bdevperf.sock 00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96604 ']' 00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.257 12:24:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:43.257 [2024-12-05 12:24:14.020772] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:20:43.257 [2024-12-05 12:24:14.020866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96604 ] 00:20:43.516 [2024-12-05 12:24:14.161462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.516 [2024-12-05 12:24:14.205265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.516 12:24:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.516 12:24:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:43.516 12:24:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:43.774 12:24:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:44.340 NVMe0n1 00:20:44.340 12:24:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96634 00:20:44.340 12:24:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.340 12:24:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:44.340 Running I/O for 10 seconds... 
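Collected from the traces above, the whole setup the timeout test runs against is five target-side RPCs plus two against the bdevperf socket. The reconnect knobs on the attach call are the point of the test: with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, the NVMe0 bdev retries a lost controller every 2 seconds and gives up on it after 5 seconds. The same sequence as a plain script, with sockets and addresses exactly as traced:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (default /var/tmp/spdk.sock, inside nvmf_tgt_ns_spdk)
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Initiator side, against the bdevperf RPC socket
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

The flood of ABORTED - SQ DELETION completions that follows is the expected result of the test's first step, nvmf_subsystem_remove_listener, which forces every queued command on the connection to complete with an abort status.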
00:20:45.274 12:24:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:45.537 9604.00 IOPS, 37.52 MiB/s [2024-12-05T12:24:16.406Z] [2024-12-05 12:24:16.145996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.537 [2024-12-05 12:24:16.146059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87040 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.537 [2024-12-05 12:24:16.146419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.537 [2024-12-05 12:24:16.146429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:45.538 [2024-12-05 12:24:16.146548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146828] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.538 [2024-12-05 12:24:16.146927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.146978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.146988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.147000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.147009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.147019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.147029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.538 [2024-12-05 12:24:16.147040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.538 [2024-12-05 12:24:16.147050] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.538 [2024-12-05 12:24:16.147061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:45.538 [2024-12-05 12:24:16.147070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 58 further in-flight WRITEs (lba 87224 through 87680, len:8 each) printed and completed with the same ABORTED - SQ DELETION (00/08) status elided ...]
00:20:45.540 [2024-12-05 12:24:16.148557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:45.540 [2024-12-05 12:24:16.148572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87688 len:8 PRP1 0x0 PRP2 0x0
00:20:45.540 [2024-12-05 12:24:16.148583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:45.540 [2024-12-05 12:24:16.148598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... 5 further queued WRITEs (lba 87696 through 87728) and 23 queued READs (lba 86792 through 86968) completed manually with the same ABORTED - SQ DELETION status elided ...]
00:20:45.541 [2024-12-05 12:24:16.149967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:45.541 [2024-12-05 12:24:16.149993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUESTs cid:1 through cid:3 aborted with the same status elided ...]
00:20:45.541 [2024-12-05 12:24:16.150068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aef50 is same with the state(6) to be set
00:20:45.541 [2024-12-05 12:24:16.150312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:45.541 [2024-12-05 12:24:16.150341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aef50 (9): Bad file descriptor
00:20:45.541 [2024-12-05 12:24:16.150454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:45.541 [2024-12-05 12:24:16.150479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16aef50 with addr=10.0.0.3, port=4420
00:20:45.541 [2024-12-05 12:24:16.150492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aef50 is same with the state(6) to be set
00:20:45.541 [2024-12-05 12:24:16.150513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aef50 (9): Bad file descriptor
00:20:45.541 [2024-12-05 12:24:16.150534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:45.541 [2024-12-05 12:24:16.150545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:45.541 [2024-12-05 12:24:16.150557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:45.541 [2024-12-05 12:24:16.150570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:45.541 [2024-12-05 12:24:16.150582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:45.541 12:24:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:20:47.416 5419.50 IOPS, 21.17 MiB/s [2024-12-05T12:24:18.285Z]
00:20:47.416 3613.00 IOPS, 14.11 MiB/s [2024-12-05T12:24:18.285Z]
00:20:47.416 [2024-12-05 12:24:18.150683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.416 [2024-12-05 12:24:18.150751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16aef50 with addr=10.0.0.3, port=4420
00:20:47.416 [2024-12-05 12:24:18.150768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aef50 is same with the state(6) to be set
00:20:47.416 [2024-12-05 12:24:18.150792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aef50 (9): Bad file descriptor
00:20:47.416 [2024-12-05 12:24:18.150814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:47.416 [2024-12-05 12:24:18.150825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:47.416 [2024-12-05 12:24:18.150835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:47.416 [2024-12-05 12:24:18.150846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:47.416 [2024-12-05 12:24:18.150857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:47.416 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:20:47.416 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:47.416 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:20:47.675 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:20:47.675 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:20:47.675 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:20:47.675 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:20:47.934 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:20:47.934 12:24:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:20:49.568 2709.75 IOPS, 10.58 MiB/s [2024-12-05T12:24:20.437Z]
00:20:49.568 2167.80 IOPS, 8.47 MiB/s [2024-12-05T12:24:20.437Z]
00:20:49.568 [2024-12-05 12:24:20.151053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:49.568 [2024-12-05 12:24:20.151108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16aef50 with addr=10.0.0.3, port=4420
00:20:49.568 [2024-12-05 12:24:20.151124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16aef50 is same with the state(6) to be set
00:20:49.568 [2024-12-05 12:24:20.151151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16aef50 (9): Bad file descriptor
00:20:49.568 [2024-12-05 12:24:20.151186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:49.568 [2024-12-05 12:24:20.151200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:49.568 [2024-12-05 12:24:20.151212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:49.568 [2024-12-05 12:24:20.151225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:49.568 [2024-12-05 12:24:20.151236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:51.435 1806.50 IOPS, 7.06 MiB/s [2024-12-05T12:24:22.304Z]
00:20:51.435 1548.43 IOPS, 6.05 MiB/s [2024-12-05T12:24:22.304Z]
00:20:51.435 [2024-12-05 12:24:22.151262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:51.435 [2024-12-05 12:24:22.151347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:51.435 [2024-12-05 12:24:22.151360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:51.435 [2024-12-05 12:24:22.151369] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:20:51.435 [2024-12-05 12:24:22.151380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
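Note: the get_controller/get_bdev helpers traced above reduce to two rpc.py queries against the bdevperf app socket. A sketch reconstructed from the trace (the actual function bodies in host/timeout.sh may differ):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    get_controller() {
        # Lists the NVMe controllers bdevperf currently has attached;
        # prints "NVMe0" while attached, nothing once the controller is deleted.
        "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # Same query for the block devices those controllers expose ("NVMe0n1").
        "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
    }

The [[ NVMe0 == \N\V\M\e\0 ]] lines are bash xtrace output of the comparison against the helper's stdout; the escaped right-hand side is simply how xtrace prints an unquoted pattern operand.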
00:20:52.477 1354.88 IOPS, 5.29 MiB/s
00:20:52.477 Latency(us)
00:20:52.477 [2024-12-05T12:24:23.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:52.477 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:52.477 Verification LBA range: start 0x0 length 0x4000
00:20:52.477 NVMe0n1 : 8.12 1334.07 5.21 15.75 0.00 94693.76 1824.58 7015926.69
00:20:52.477 [2024-12-05T12:24:23.346Z] ===================================================================================================================
00:20:52.477 [2024-12-05T12:24:23.346Z] Total : 1334.07 5.21 15.75 0.00 94693.76 1824.58 7015926.69
00:20:52.477 {
00:20:52.477 "results": [
00:20:52.477 {
00:20:52.477 "job": "NVMe0n1",
00:20:52.477 "core_mask": "0x4",
00:20:52.477 "workload": "verify",
00:20:52.477 "status": "finished",
00:20:52.477 "verify_range": {
00:20:52.477 "start": 0,
00:20:52.477 "length": 16384
00:20:52.477 },
00:20:52.477 "queue_depth": 128,
00:20:52.477 "io_size": 4096,
00:20:52.477 "runtime": 8.124744,
00:20:52.477 "iops": 1334.0728027861555,
00:20:52.477 "mibps": 5.21122188588342,
00:20:52.477 "io_failed": 128,
00:20:52.477 "io_timeout": 0,
00:20:52.477 "avg_latency_us": 94693.75849863641,
00:20:52.477 "min_latency_us": 1824.581818181818,
00:20:52.477 "max_latency_us": 7015926.69090909
00:20:52.477 }
00:20:52.477 ],
00:20:52.477 "core_count": 1
00:20:52.477 }
00:20:53.053 12:24:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:20:53.053 12:24:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:53.053 12:24:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:20:53.311 12:24:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:20:53.311 12:24:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:20:53.311 12:24:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:20:53.311 12:24:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:20:53.311 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:20:53.311 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96634
00:20:53.311 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96604
00:20:53.311 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96604 ']'
00:20:53.311 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96604
00:20:53.570 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:20:53.570 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:53.570 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96604
00:20:53.570 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:20:53.570 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:20:53.570 killing process with pid 96604
00:20:53.570 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96604'
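Note: the block above is bdevperf's JSON result dump, interleaved with the log's runtime prefixes. If the headline numbers are needed (for instance to compare runs), stripping the prefixes and querying with jq is enough; "results.json" here is a hypothetical capture of the blob, not a file the test produces:

    sed 's/^[0-9:.]*[0-9] //' results.json |
        jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/O, avg \(.avg_latency_us) us"'

For this run that yields one line for NVMe0n1: roughly 1334 IOPS with 128 failed I/O and an average latency near 94.7 ms, consistent with the controller being unreachable for most of the 8.12 s runtime.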
00:20:53.570 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96604
00:20:53.570 Received shutdown signal, test time was about 9.182419 seconds
00:20:53.570
00:20:53.570 Latency(us)
00:20:53.570 [2024-12-05T12:24:24.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:53.570 [2024-12-05T12:24:24.439Z] ===================================================================================================================
00:20:53.570 [2024-12-05T12:24:24.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:53.570 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96604
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:20:53.828 [2024-12-05 12:24:24.645595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96791
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96791 /var/tmp/bdevperf.sock
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96791 ']'
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:53.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:53.828 12:24:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:20:54.086 [2024-12-05 12:24:24.708298] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
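Note: the run that follows re-attaches the controller with explicit recovery knobs (traced below at host/timeout.sh@78 and @79). A sketch of the same RPC sequence; the per-flag explanations are my reading of the bdev_nvme options, not quoted SPDK documentation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # -r -1 is copied verbatim from the trace below (an unlimited retry setting).
    "$rpc" -s "$sock" bdev_nvme_set_options -r -1

    # --reconnect-delay-sec 1:       retry the TCP connection roughly every second
    # --fast-io-fail-timeout-sec 2:  start failing queued I/O back after ~2 s unreachable
    # --ctrlr-loss-timeout-sec 5:    give up on the reset and delete the controller after ~5 s
    "$rpc" -s "$sock" bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1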
00:20:54.086 [2024-12-05 12:24:24.708386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96791 ]
00:20:54.086 [2024-12-05 12:24:24.844082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:54.086 [2024-12-05 12:24:24.890684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:54.344 12:24:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:54.344 12:24:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:20:54.344 12:24:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:20:54.602 12:24:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:20:54.859 NVMe0n1
00:20:54.859 12:24:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96822
00:20:54.859 12:24:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:54.859 12:24:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:20:54.859 Running I/O for 10 seconds...
00:20:55.790 12:24:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:20:56.050 9543.00 IOPS, 37.28 MiB/s [2024-12-05T12:24:26.919Z]
00:20:56.050 [2024-12-05 12:24:26.807505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b96b0 is same with the state(6) to be set
[... several dozen further repetitions of the same tcp.c:1790 recv-state message for tqpair=0x10b96b0 (12:24:26.807561 through 12:24:26.808176) elided ...]
00:20:56.051 [2024-12-05 12:24:26.809867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:56.051 [2024-12-05 12:24:26.809899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 7 further in-flight READs (lba 86768 through 86816) aborted with the same SQ DELETION status elided ...]
00:20:56.051 [2024-12-05 12:24:26.810057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:56.051 [2024-12-05 12:24:26.810065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 13 further in-flight WRITEs (lba 86904 through 87000) aborted with the same SQ DELETION status elided ...]
00:20:56.052 [2024-12-05 12:24:26.810365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:56.052 [2024-12-05 12:24:26.810374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810598] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810820] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.810990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.810998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.811017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.811036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.052 [2024-12-05 12:24:26.811056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.052 [2024-12-05 12:24:26.811076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.052 [2024-12-05 12:24:26.811095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.052 [2024-12-05 12:24:26.811114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.052 [2024-12-05 12:24:26.811133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.052 [2024-12-05 12:24:26.811152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.052 [2024-12-05 12:24:26.811162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.053 [2024-12-05 12:24:26.811171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.053 [2024-12-05 12:24:26.811190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.053 [2024-12-05 12:24:26.811225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:56.053 [2024-12-05 12:24:26.811237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.053 [2024-12-05 12:24:26.811251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811488] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811720] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87504 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.811988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.811999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.812007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.812018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.812027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.812038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.053 [2024-12-05 12:24:26.812046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.053 [2024-12-05 12:24:26.812057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 
[2024-12-05 12:24:26.812130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.054 [2024-12-05 12:24:26.812299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87656 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 
[2024-12-05 12:24:26.812388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87664 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87672 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87680 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87688 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87696 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87704 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812593] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87712 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87720 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87728 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87736 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87744 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87752 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87760 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87768 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.812865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:56.054 [2024-12-05 12:24:26.812872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:56.054 [2024-12-05 12:24:26.812879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87776 len:8 PRP1 0x0 PRP2 0x0 00:20:56.054 [2024-12-05 12:24:26.812889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.054 [2024-12-05 12:24:26.813200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:56.054 [2024-12-05 12:24:26.813286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dcf50 (9): Bad file descriptor 00:20:56.054 [2024-12-05 12:24:26.813410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.054 [2024-12-05 12:24:26.813447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7dcf50 with addr=10.0.0.3, port=4420 00:20:56.055 [2024-12-05 12:24:26.813458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf50 is same with the state(6) to be set 00:20:56.055 [2024-12-05 12:24:26.813476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dcf50 (9): Bad file descriptor 00:20:56.055 [2024-12-05 12:24:26.813490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:56.055 [2024-12-05 12:24:26.813499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:56.055 [2024-12-05 12:24:26.813509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:56.055 [2024-12-05 12:24:26.813535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
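The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: the initiator keeps dialing 10.0.0.3:4420 while the target has no NVMe/TCP listener there, so every reconnect poll fails and the controller reset is retried. A minimal Python sketch of the same condition (address and port copied from the log; running it against any reachable host with nothing listening on the port produces the same errno):

import errno, socket

# errno 111 == ECONNREFUSED on Linux; this is what posix_sock_create hits
# while the target's listener on 10.0.0.3:4420 is absent.
assert errno.ECONNREFUSED == 111
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(("10.0.0.3", 4420))  # address/port taken from the log above
except OSError as e:
    print(e.errno, e.strerror)     # -> 111 Connection refused (no listener)
finally:
    s.close()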
00:20:56.055 [2024-12-05 12:24:26.813545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:56.055 12:24:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:56.986 5422.50 IOPS, 21.18 MiB/s [2024-12-05T12:24:27.855Z] [2024-12-05 12:24:27.813631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.986 [2024-12-05 12:24:27.813708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7dcf50 with addr=10.0.0.3, port=4420 00:20:56.986 [2024-12-05 12:24:27.813722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf50 is same with the state(6) to be set 00:20:56.986 [2024-12-05 12:24:27.813741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dcf50 (9): Bad file descriptor 00:20:56.987 [2024-12-05 12:24:27.813757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:56.987 [2024-12-05 12:24:27.813766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:56.987 [2024-12-05 12:24:27.813775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:56.987 [2024-12-05 12:24:27.813784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:56.987 [2024-12-05 12:24:27.813793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:56.987 12:24:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:57.244 [2024-12-05 12:24:28.073348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:57.244 12:24:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96822 00:20:58.068 3615.00 IOPS, 14.12 MiB/s [2024-12-05T12:24:28.937Z] [2024-12-05 12:24:28.824444] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
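host/timeout.sh@91 re-adds the TCP listener through scripts/rpc.py, after which the 12:24:28 reconnect succeeds ("Resetting controller successful"). rpc.py is a thin wrapper over SPDK's JSON-RPC socket; below is a hedged sketch of the equivalent raw request, assuming the default /var/tmp/spdk.sock application socket (the socket path is not shown in this log):

import json, socket

# Roughly what `rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1
#   -t tcp -a 10.0.0.3 -s 4420` sends over the app's Unix-domain RPC socket.
req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_listener",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "trsvcid": "4420",
        },
    },
}
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/var/tmp/spdk.sock")  # assumed default socket path
sock.sendall(json.dumps(req).encode())
# A success response typically looks like {"jsonrpc": "2.0", "id": 1, "result": true}
print(sock.recv(4096).decode())
sock.close()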
00:20:59.937 2711.25 IOPS, 10.59 MiB/s [2024-12-05T12:24:31.738Z] 3874.60 IOPS, 15.14 MiB/s [2024-12-05T12:24:33.109Z] 4919.83 IOPS, 19.22 MiB/s [2024-12-05T12:24:34.041Z] 5622.71 IOPS, 21.96 MiB/s [2024-12-05T12:24:34.971Z] 6162.25 IOPS, 24.07 MiB/s [2024-12-05T12:24:35.903Z] 6596.22 IOPS, 25.77 MiB/s [2024-12-05T12:24:35.903Z] 6951.60 IOPS, 27.15 MiB/s 00:21:05.034 Latency(us) 00:21:05.034 [2024-12-05T12:24:35.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.034 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.034 Verification LBA range: start 0x0 length 0x4000 00:21:05.034 NVMe0n1 : 10.01 6954.46 27.17 0.00 0.00 18375.51 1310.72 3019898.88 00:21:05.034 [2024-12-05T12:24:35.903Z] =================================================================================================================== 00:21:05.034 [2024-12-05T12:24:35.903Z] Total : 6954.46 27.17 0.00 0.00 18375.51 1310.72 3019898.88 00:21:05.034 { 00:21:05.034 "results": [ 00:21:05.034 { 00:21:05.034 "job": "NVMe0n1", 00:21:05.034 "core_mask": "0x4", 00:21:05.034 "workload": "verify", 00:21:05.034 "status": "finished", 00:21:05.034 "verify_range": { 00:21:05.034 "start": 0, 00:21:05.034 "length": 16384 00:21:05.034 }, 00:21:05.034 "queue_depth": 128, 00:21:05.034 "io_size": 4096, 00:21:05.034 "runtime": 10.005097, 00:21:05.034 "iops": 6954.45531412639, 00:21:05.034 "mibps": 27.16584107080621, 00:21:05.034 "io_failed": 0, 00:21:05.034 "io_timeout": 0, 00:21:05.034 "avg_latency_us": 18375.51177276647, 00:21:05.034 "min_latency_us": 1310.72, 00:21:05.034 "max_latency_us": 3019898.88 00:21:05.034 } 00:21:05.034 ], 00:21:05.034 "core_count": 1 00:21:05.034 } 00:21:05.034 12:24:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96946 00:21:05.034 12:24:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.034 12:24:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:21:05.034 Running I/O for 10 seconds... 
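The results JSON above ties the summary line together: the reported MiB/s is just IOPS times the 4096-byte io_size, and latencies are in microseconds over the 10.005 s runtime. A quick check of that arithmetic with values copied from the JSON block:

# MiB/s in the bdevperf summary is derived from IOPS and io_size.
iops = 6954.45531412639         # "iops" from the results JSON above
io_size = 4096                  # "io_size" in bytes
mibps = iops * io_size / 2**20  # bytes per second -> MiB per second
print(round(mibps, 2))          # 27.17, matching the "mibps" field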
00:21:05.967 12:24:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:21:06.228 9826.00 IOPS, 38.38 MiB/s [2024-12-05T12:24:37.097Z] [2024-12-05 12:24:36.960226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7d40 is same with the state(6) to be set 
00:21:06.229 [2024-12-05 12:24:36.962998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05 12:24:36.963060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05 12:24:36.963082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05 12:24:36.963102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05 12:24:36.963139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05
12:24:36.963159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05 12:24:36.963179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05 12:24:36.963211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05 12:24:36.963230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.229 [2024-12-05 12:24:36.963266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.229 [2024-12-05 12:24:36.963297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89352 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.963982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.963992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.964002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.964013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.964022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.964033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.964042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.964053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:06.230 [2024-12-05 12:24:36.964062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.964073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.964081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.964092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.964101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.964112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.230 [2024-12-05 12:24:36.964121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.230 [2024-12-05 12:24:36.964131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.231 [2024-12-05 12:24:36.964435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964476] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 
[2024-12-05 12:24:36.964881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.231 [2024-12-05 12:24:36.964930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.231 [2024-12-05 12:24:36.964940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.964949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.964960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.964968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.964979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.964989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.964999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.232 [2024-12-05 12:24:36.965421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89968 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89976 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89984 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89992 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90000 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90008 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90016 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90024 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90032 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.232 [2024-12-05 12:24:36.965772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90040 len:8 PRP1 0x0 PRP2 0x0 00:21:06.232 [2024-12-05 12:24:36.965780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.232 [2024-12-05 12:24:36.965789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.232 [2024-12-05 12:24:36.965796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.233 [2024-12-05 12:24:36.965804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90048 len:8 PRP1 0x0 PRP2 0x0 00:21:06.233 [2024-12-05 12:24:36.965812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.233 [2024-12-05 12:24:36.965821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.233 [2024-12-05 12:24:36.965828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.233 [2024-12-05 12:24:36.965835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90056 len:8 PRP1 0x0 PRP2 0x0 00:21:06.233 [2024-12-05 12:24:36.965844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.233 [2024-12-05 12:24:36.965853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.233 [2024-12-05 12:24:36.965860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.233 [2024-12-05 12:24:36.965867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90064 len:8 PRP1 0x0 PRP2 0x0 00:21:06.233 [2024-12-05 12:24:36.965876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.233 [2024-12-05 12:24:36.965885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.233 [2024-12-05 12:24:36.965892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.233 [2024-12-05 12:24:36.965899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90072 len:8 PRP1 0x0 PRP2 0x0 00:21:06.233 [2024-12-05 12:24:36.965908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.233 12:24:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:06.233 [2024-12-05 12:24:36.982336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:06.233 [2024-12-05 12:24:36.982369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:06.233 [2024-12-05 12:24:36.982380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90080 len:8 PRP1 0x0 PRP2 0x0 00:21:06.233 [2024-12-05 12:24:36.982391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:06.233 [2024-12-05 12:24:36.982401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:06.233 [2024-12-05 12:24:36.982408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:06.233 [2024-12-05 12:24:36.982416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90088 len:8 PRP1 0x0 PRP2 0x0
00:21:06.233 [2024-12-05 12:24:36.982425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:06.233 [2024-12-05 12:24:36.982622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:06.233 [2024-12-05 12:24:36.982659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[matching ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs for cid:2, cid:1, and cid:0 omitted, 12:24:36.982675 through 12:24:36.982737]
00:21:06.233 [2024-12-05 12:24:36.982750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf50 is same with the state(6) to be set
00:21:06.233 [2024-12-05 12:24:36.983075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:21:06.233 [2024-12-05 12:24:36.983124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dcf50 (9): Bad file descriptor
00:21:06.233 [2024-12-05 12:24:36.983337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:06.233 [2024-12-05 12:24:36.983380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7dcf50 with addr=10.0.0.3, port=4420
00:21:06.233 [2024-12-05 12:24:36.983396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dcf50 is same with the state(6) to be set
00:21:06.233 [2024-12-05 12:24:36.983422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7dcf50 (9): Bad file descriptor
00:21:06.233 [2024-12-05 12:24:36.983451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state
00:21:06.233 [2024-12-05 12:24:36.983465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed
00:21:06.233 [2024-12-05 12:24:36.983495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:21:06.233 [2024-12-05 12:24:36.983518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed.
00:21:06.233 [2024-12-05 12:24:36.983533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:21:07.168 5567.00 IOPS, 21.75 MiB/s [2024-12-05T12:24:38.037Z]
[the reconnect attempt at 12:24:37.983 fails with the same connect() errno = 111 / reinitialization-failed / "Resetting controller failed." sequence]
00:21:08.362 3711.33 IOPS, 14.50 MiB/s [2024-12-05T12:24:39.231Z]
[the reconnect attempt at 12:24:38.983 fails identically]
00:21:09.297 2783.50 IOPS, 10.87 MiB/s [2024-12-05T12:24:40.166Z]
12:24:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[the reconnect attempt at 12:24:39.987 fails identically; the re-added listener is not accepting connections yet]
00:21:09.555 [2024-12-05 12:24:40.231854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
12:24:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96946
00:21:10.380 2226.80 IOPS, 8.70 MiB/s [2024-12-05T12:24:41.249Z]
[2024-12-05 12:24:41.011594] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
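What recovers the run above is visible in the shell trace: host/timeout.sh re-adds the TCP listener on the target, and the host's periodic reconnect succeeds on its next attempt. A minimal sketch of that recovery step, with the command and PID copied verbatim from the log:

    # Re-create the listener removed earlier in the test; the initiator has
    # been retrying connect() and failing with errno 111 (connection refused)
    # while nothing was listening on 10.0.0.3:4420.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Block on the backgrounded bdevperf run (PID 96946 in this log) until
    # the timed workload finishes.
    wait 96946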
00:21:12.248 3244.67 IOPS, 12.67 MiB/s [2024-12-05T12:24:44.051Z]
4264.57 IOPS, 16.66 MiB/s [2024-12-05T12:24:44.983Z]
4998.12 IOPS, 19.52 MiB/s [2024-12-05T12:24:45.918Z]
5589.67 IOPS, 21.83 MiB/s [2024-12-05T12:24:45.918Z]
6067.40 IOPS, 23.70 MiB/s
00:21:15.049 Latency(us)
00:21:15.049 [2024-12-05T12:24:45.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:15.049 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:15.049 Verification LBA range: start 0x0 length 0x4000
00:21:15.049 NVMe0n1 : 10.01 6075.02 23.73 4364.08 0.00 12235.75 703.77 3035150.89
00:21:15.049 [2024-12-05T12:24:45.918Z] ===================================================================================================================
00:21:15.049 [2024-12-05T12:24:45.918Z] Total : 6075.02 23.73 4364.08 0.00 12235.75 0.00 3035150.89
00:21:15.049 {
00:21:15.049 "results": [
00:21:15.049 {
00:21:15.049 "job": "NVMe0n1",
00:21:15.049 "core_mask": "0x4",
00:21:15.049 "workload": "verify",
00:21:15.049 "status": "finished",
00:21:15.049 "verify_range": {
00:21:15.049 "start": 0,
00:21:15.049 "length": 16384
00:21:15.049 },
00:21:15.049 "queue_depth": 128,
00:21:15.049 "io_size": 4096,
00:21:15.049 "runtime": 10.008519,
00:21:15.049 "iops": 6075.024686469596,
00:21:15.049 "mibps": 23.73056518152186,
00:21:15.049 "io_failed": 43678,
00:21:15.049 "io_timeout": 0,
00:21:15.049 "avg_latency_us": 12235.749608798551,
00:21:15.049 "min_latency_us": 703.7672727272727,
00:21:15.049 "max_latency_us": 3035150.8945454545
00:21:15.049 }
00:21:15.049 ],
00:21:15.049 "core_count": 1
00:21:15.049 }
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96791
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96791 ']'
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96791
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96791
killing process with pid 96791
Received shutdown signal, test time was about 10.000000 seconds
00:21:15.049
00:21:15.049 Latency(us)
[2024-12-05T12:24:45.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T12:24:45.918Z] ===================================================================================================================
[2024-12-05T12:24:45.918Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96791'
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96791
12:24:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96791
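The summary row and the JSON results block above are internally consistent and can be cross-checked by hand: throughput in MiB/s is iops times io_size, and the Fail/s column is io_failed divided by runtime. For example:

    # 6075.02 IOPS at 4096-byte I/Os is the reported 23.73 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 6075.02 * 4096 / (1024 * 1024) }'

    # 43678 failed I/Os over the 10.008519 s runtime is the reported 4364.08 Fail/s
    awk 'BEGIN { printf "%.2f Fail/s\n", 43678 / 10.008519 }'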
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97067
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97067 /var/tmp/bdevperf.sock
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97067 ']'
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
12:24:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
[2024-12-05 12:24:46.150636] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
[2024-12-05 12:24:46.150773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97067 ]
[2024-12-05 12:24:46.294567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-05 12:24:46.346013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97095
12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97067 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:21:17.040 NVMe0n1
00:21:17.040 12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97153
12:24:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
Running I/O for 10 seconds...
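The trace above is the standard pattern for driving bdevperf as a long-running RPC server: start it idle with -z, wait for its UNIX-domain RPC socket, configure and attach the NVMe-oF controller, then trigger the workload. A condensed sketch of that sequence, with every path, flag, and value taken from the log (the --ctrlr-loss-timeout-sec 5 / --reconnect-delay-sec 2 pair is what makes the later listener removal survivable: retry every 2 s, give up only after 5 s of loss):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) with a private RPC socket; the workload
    # (randread, qd 128, 4 KiB, 10 s) is armed but not started yet.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &

    # Apply bdev_nvme options before attaching (-r -1 -e 9, values as logged).
    "$rpc" -s "$sock" bdev_nvme_set_options -r -1 -e 9

    # Attach the target with the reconnect parameters described above.
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Tell bdevperf to run the armed workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests &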
00:21:17.976 12:24:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:21:18.238 19248.00 IOPS, 75.19 MiB/s [2024-12-05T12:24:49.107Z]
00:21:18.238 [2024-12-05 12:24:49.049250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ba8f0 is same with the state(6) to be set
[the identical recv-state error repeats roughly 90 more times, 12:24:49.049352 through 12:24:49.050231, as the target tears down the queue pair]
00:21:18.239 [2024-12-05 12:24:49.050502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:18.239 [2024-12-05 12:24:49.050541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[roughly a hundred more READ / ABORTED - SQ DELETION (00/08) pairs with varying cid and lba follow, 12:24:49.050564 through 12:24:49.052741, as every read queued on sqid:1 is aborted]
00:21:18.243 [2024-12-05 12:24:49.052752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113520 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:18.243 [2024-12-05 12:24:49.052958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.052988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.052996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.053007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.053015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.053026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.053034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.053045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.053054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.053069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.243 [2024-12-05 12:24:49.053078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.243 [2024-12-05 12:24:49.053088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-12-05 12:24:49.053097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-12-05 12:24:49.053117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-12-05 12:24:49.053136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:18.244 [2024-12-05 12:24:49.053155] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c1820 is same with the state(6) to be set 00:21:18.244 [2024-12-05 12:24:49.053177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:18.244 [2024-12-05 12:24:49.053185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:18.244 [2024-12-05 12:24:49.053193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3984 len:8 PRP1 0x0 PRP2 0x0 00:21:18.244 [2024-12-05 12:24:49.053206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.244 [2024-12-05 12:24:49.053375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.244 [2024-12-05 12:24:49.053396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.244 [2024-12-05 12:24:49.053415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.244 [2024-12-05 12:24:49.053433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.244 [2024-12-05 12:24:49.053441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055f50 is same with the state(6) to be set 00:21:18.244 [2024-12-05 12:24:49.053688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:18.244 [2024-12-05 12:24:49.053737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055f50 (9): Bad file descriptor 00:21:18.244 [2024-12-05 12:24:49.053861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.244 [2024-12-05 12:24:49.053898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2055f50 with addr=10.0.0.3, port=4420 00:21:18.244 [2024-12-05 12:24:49.053911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055f50 is same with the state(6) to be set 00:21:18.244 [2024-12-05 12:24:49.053930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055f50 (9): Bad file descriptor 00:21:18.244 [2024-12-05 12:24:49.053946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:18.244 [2024-12-05 12:24:49.053955] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:18.244 [2024-12-05 12:24:49.053973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:18.244 [2024-12-05 12:24:49.053984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:18.244 [2024-12-05 12:24:49.053995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:18.244 12:24:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97153 00:21:20.116 11385.50 IOPS, 44.47 MiB/s [2024-12-05T12:24:51.243Z] 7590.33 IOPS, 29.65 MiB/s [2024-12-05T12:24:51.243Z] [2024-12-05 12:24:51.054136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.374 [2024-12-05 12:24:51.054217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2055f50 with addr=10.0.0.3, port=4420 00:21:20.374 [2024-12-05 12:24:51.054233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055f50 is same with the state(6) to be set 00:21:20.374 [2024-12-05 12:24:51.054254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055f50 (9): Bad file descriptor 00:21:20.374 [2024-12-05 12:24:51.054271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:20.374 [2024-12-05 12:24:51.054316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:20.374 [2024-12-05 12:24:51.054328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:20.374 [2024-12-05 12:24:51.054338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:21:20.374 [2024-12-05 12:24:51.054349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:21:22.244 5692.75 IOPS, 22.24 MiB/s [2024-12-05T12:24:53.113Z] 4554.20 IOPS, 17.79 MiB/s [2024-12-05T12:24:53.113Z] [2024-12-05 12:24:53.054505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.244 [2024-12-05 12:24:53.054586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2055f50 with addr=10.0.0.3, port=4420 00:21:22.244 [2024-12-05 12:24:53.054602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2055f50 is same with the state(6) to be set 00:21:22.244 [2024-12-05 12:24:53.054624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2055f50 (9): Bad file descriptor 00:21:22.244 [2024-12-05 12:24:53.054642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:21:22.244 [2024-12-05 12:24:53.054652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:21:22.244 [2024-12-05 12:24:53.054662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:21:22.244 [2024-12-05 12:24:53.054674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
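That failure sequence is one pass of a loop that repeats about every 2 seconds until the ~8 s test window lapses: posix_sock_create gets connect() refused (errno = 111, ECONNREFUSED) against 10.0.0.3:4420, nvme_ctrlr_process_init and spdk_nvme_ctrlr_reconnect_poll_async give up, bdev_nvme_reset_ctrlr_complete logs the failed reset, and nvme_ctrlr_disconnect immediately schedules the next attempt. Meanwhile the interleaved bdevperf samples (11385.50 -> 7590.33 -> 5692.75 -> 4554.20 IOPS ...) decay because the running average keeps accumulating seconds with no completed I/O. A quick, hedged way to pull that cadence out of a saved copy of this console output (build.log is an assumed filename; nothing in the job writes it):

  grep -o '\[2024-12-05 12:24:[0-9.]*\] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect' build.log
  # one match per reset attempt: twice at 12:24:49 (the initial reset plus an
  # immediate retry), then at 12:24:51 and 12:24:53 -- roughly 2 s apart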
00:21:22.244 [2024-12-05 12:24:53.054684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:21:24.116 3795.17 IOPS, 14.82 MiB/s [2024-12-05T12:24:55.244Z]
3253.00 IOPS, 12.71 MiB/s [2024-12-05T12:24:55.244Z]
[2024-12-05 12:24:55.054762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:21:24.375 [2024-12-05 12:24:55.054810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:21:24.375 [2024-12-05 12:24:55.054820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:21:24.375 [2024-12-05 12:24:55.054830] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:21:24.375 [2024-12-05 12:24:55.054841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:21:25.305 2846.38 IOPS, 11.12 MiB/s
00:21:25.305 Latency(us)
00:21:25.305 [2024-12-05T12:24:56.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:25.305 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:21:25.305 NVMe0n1 : 8.20 2777.65 10.85 15.61 0.00 45778.50 1951.19 7015926.69
00:21:25.305 [2024-12-05T12:24:56.174Z] ===================================================================================================================
00:21:25.305 [2024-12-05T12:24:56.174Z] Total : 2777.65 10.85 15.61 0.00 45778.50 1951.19 7015926.69
00:21:25.305 {
00:21:25.305   "results": [
00:21:25.305     {
00:21:25.305       "job": "NVMe0n1",
00:21:25.305       "core_mask": "0x4",
00:21:25.305       "workload": "randread",
00:21:25.305       "status": "finished",
00:21:25.305       "queue_depth": 128,
00:21:25.305       "io_size": 4096,
00:21:25.305       "runtime": 8.197925,
00:21:25.305       "iops": 2777.6541014951954,
00:21:25.305       "mibps": 10.850211333965607,
00:21:25.305       "io_failed": 128,
00:21:25.305       "io_timeout": 0,
00:21:25.305       "avg_latency_us": 45778.50318513313,
00:21:25.305       "min_latency_us": 1951.1854545454546,
00:21:25.305       "max_latency_us": 7015926.69090909
00:21:25.305     }
00:21:25.305   ],
00:21:25.305   "core_count": 1
00:21:25.305 }
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:21:25.305 Attaching 5 probes...
00:21:25.305 1491.114678: reset bdev controller NVMe0
00:21:25.305 1491.211165: reconnect bdev controller NVMe0
00:21:25.305 3491.459500: reconnect delay bdev controller NVMe0
00:21:25.305 3491.495777: reconnect bdev controller NVMe0
00:21:25.305 5491.827808: reconnect delay bdev controller NVMe0
00:21:25.305 5491.863103: reconnect bdev controller NVMe0
00:21:25.305 7492.162214: reconnect delay bdev controller NVMe0
00:21:25.305 7492.179711: reconnect bdev controller NVMe0
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97095
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97067
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97067 ']'
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97067
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97067
00:21:25.305 killing process with pid 97067
Received shutdown signal, test time was about 8.268839 seconds
00:21:25.305
00:21:25.305 Latency(us)
00:21:25.305 [2024-12-05T12:24:56.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:25.305 [2024-12-05T12:24:56.174Z] ===================================================================================================================
00:21:25.305 [2024-12-05T12:24:56.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97067'
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97067
00:21:25.305 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97067
00:21:25.562 12:24:56
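Two notes on that tail of the output, as minimal sketches rather than anything taken from the test suite itself. First, the summary table and the JSON block are internally consistent: the MiB/s column is just iops * io_size rescaled to MiB. A one-line check with the numbers copied from the "results" block above:

  awk 'BEGIN { printf "%.6f MiB/s\n", 2777.6541014951954 * 4096 / 1048576 }'
  # prints 10.850211 MiB/s, matching the reported "mibps" field

Second, the pass/fail gate visible in the xtrace lines: host/timeout.sh counts the 'reconnect delay' probe hits in trace.txt and fails only when there are two or fewer, so "(( 3 <= 2 ))" evaluating false is the success path (three delayed reconnects followed the first immediate one). A hedged, standalone paraphrase of that assertion (variable names here are illustrative, not the script's literal code):

  delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
  if (( delays <= 2 )); then
      echo "expected more than 2 reconnect delays, got $delays" >&2
      exit 1
  fi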
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.820 rmmod nvme_tcp 00:21:25.820 rmmod nvme_fabrics 00:21:26.078 rmmod nvme_keyring 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 96507 ']' 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 96507 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96507 ']' 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96507 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96507 00:21:26.078 killing process with pid 96507 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96507' 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96507 00:21:26.078 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96507 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:26.337 12:24:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:26.337 12:24:57 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.337 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.338 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.597 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:21:26.597 ************************************ 00:21:26.597 END TEST nvmf_timeout 00:21:26.597 ************************************ 00:21:26.597 00:21:26.597 real 0m46.242s 00:21:26.597 user 2m14.732s 00:21:26.597 sys 0m4.991s 00:21:26.597 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.597 12:24:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:26.597 12:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:21:26.597 12:24:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:26.597 00:21:26.597 real 5m32.086s 00:21:26.597 user 14m4.868s 00:21:26.597 sys 1m5.964s 00:21:26.597 12:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.597 12:24:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.597 ************************************ 00:21:26.597 END TEST nvmf_host 00:21:26.597 ************************************ 00:21:26.597 12:24:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:21:26.597 12:24:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:21:26.597 12:24:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:21:26.597 12:24:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:26.597 12:24:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.597 12:24:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:26.597 ************************************ 00:21:26.597 START TEST nvmf_target_core_interrupt_mode 00:21:26.597 ************************************ 00:21:26.597 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:21:26.597 * Looking for test storage... 
00:21:26.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:21:26.597 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:26.597 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:21:26.597 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:26.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.857 --rc genhtml_branch_coverage=1 00:21:26.857 --rc genhtml_function_coverage=1 00:21:26.857 --rc genhtml_legend=1 00:21:26.857 --rc geninfo_all_blocks=1 00:21:26.857 --rc geninfo_unexecuted_blocks=1 00:21:26.857 00:21:26.857 ' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:26.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.857 --rc genhtml_branch_coverage=1 00:21:26.857 --rc genhtml_function_coverage=1 00:21:26.857 --rc genhtml_legend=1 00:21:26.857 --rc geninfo_all_blocks=1 00:21:26.857 --rc geninfo_unexecuted_blocks=1 00:21:26.857 00:21:26.857 ' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:26.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.857 --rc genhtml_branch_coverage=1 00:21:26.857 --rc genhtml_function_coverage=1 00:21:26.857 --rc genhtml_legend=1 00:21:26.857 --rc geninfo_all_blocks=1 00:21:26.857 --rc geninfo_unexecuted_blocks=1 00:21:26.857 00:21:26.857 ' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:26.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.857 --rc genhtml_branch_coverage=1 00:21:26.857 --rc genhtml_function_coverage=1 00:21:26.857 --rc genhtml_legend=1 00:21:26.857 --rc geninfo_all_blocks=1 00:21:26.857 --rc geninfo_unexecuted_blocks=1 00:21:26.857 00:21:26.857 ' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:21:26.857 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:21:26.858 ************************************ 00:21:26.858 START TEST nvmf_abort 00:21:26.858 ************************************ 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:21:26.858 * Looking for test storage... 00:21:26.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:21:26.858 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:27.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.118 --rc genhtml_branch_coverage=1 00:21:27.118 --rc genhtml_function_coverage=1 00:21:27.118 --rc genhtml_legend=1 00:21:27.118 --rc geninfo_all_blocks=1 00:21:27.118 --rc geninfo_unexecuted_blocks=1 00:21:27.118 00:21:27.118 ' 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:27.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.118 --rc genhtml_branch_coverage=1 00:21:27.118 --rc genhtml_function_coverage=1 00:21:27.118 --rc genhtml_legend=1 00:21:27.118 --rc geninfo_all_blocks=1 00:21:27.118 --rc geninfo_unexecuted_blocks=1 00:21:27.118 00:21:27.118 ' 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:27.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.118 --rc genhtml_branch_coverage=1 00:21:27.118 --rc genhtml_function_coverage=1 00:21:27.118 --rc genhtml_legend=1 00:21:27.118 --rc geninfo_all_blocks=1 00:21:27.118 --rc geninfo_unexecuted_blocks=1 00:21:27.118 00:21:27.118 ' 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:27.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.118 --rc genhtml_branch_coverage=1 00:21:27.118 --rc genhtml_function_coverage=1 00:21:27.118 --rc genhtml_legend=1 00:21:27.118 --rc geninfo_all_blocks=1 00:21:27.118 --rc geninfo_unexecuted_blocks=1 00:21:27.118 00:21:27.118 ' 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.118 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.119 12:24:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:27.119 Cannot find device "nvmf_init_br" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:27.119 Cannot find device "nvmf_init_br2" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:27.119 Cannot find device "nvmf_tgt_br" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:27.119 Cannot find device "nvmf_tgt_br2" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:27.119 Cannot find device "nvmf_init_br" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:27.119 Cannot find device "nvmf_init_br2" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:27.119 Cannot find device "nvmf_tgt_br" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:27.119 Cannot find device "nvmf_tgt_br2" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:27.119 Cannot find device "nvmf_br" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:27.119 Cannot find device "nvmf_init_if" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:27.119 Cannot find device "nvmf_init_if2" 00:21:27.119 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:27.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:27.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:27.120 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:27.379 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:27.379 12:24:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:27.379 
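At this point nvmf_veth_init has rebuilt the virtual test topology from scratch: the initiator-side veth ends stay in the root namespace, the target-side ends are moved into nvmf_tgt_ns_spdk, and the 10.0.0.0/24 addresses match the NVMF_*_IP variables set earlier. Reduced to a single initiator/target pair, the recipe is roughly (a sketch, not the test script itself; needs root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    # The bridge enslaving traced on the lines that follow is what completes the path:
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
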
12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:27.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:27.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:21:27.379 00:21:27.379 --- 10.0.0.3 ping statistics --- 00:21:27.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.379 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:27.379 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:27.379 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:21:27.379 00:21:27.379 --- 10.0.0.4 ping statistics --- 00:21:27.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.379 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:27.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:27.379 00:21:27.379 --- 10.0.0.1 ping statistics --- 00:21:27.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.379 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:27.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:21:27.379 00:21:27.379 --- 10.0.0.2 ping statistics --- 00:21:27.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.379 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=97578 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 97578 00:21:27.379 12:24:58 
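Both directions ping clean, so nvmftestinit prefixes NVMF_APP with the namespace wrapper and settles on '-t tcp -o' transport options; nvmfappstart then launches the target and waits for its RPC socket. The invocation traced on the next line is, in effect, assembled like this (values taken from this run):

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode)
    "${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" -m 0xE &   # run the target inside the namespace
    nvmfpid=$!                                             # 97578 in this run
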
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 97578 ']' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.379 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.639 [2024-12-05 12:24:58.266681] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:21:27.639 [2024-12-05 12:24:58.267988] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:21:27.639 [2024-12-05 12:24:58.268065] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.639 [2024-12-05 12:24:58.413028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:27.639 [2024-12-05 12:24:58.463523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.639 [2024-12-05 12:24:58.463589] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.639 [2024-12-05 12:24:58.463615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.639 [2024-12-05 12:24:58.463631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.639 [2024-12-05 12:24:58.463637] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.639 [2024-12-05 12:24:58.464919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.639 [2024-12-05 12:24:58.465007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.639 [2024-12-05 12:24:58.465012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.898 [2024-12-05 12:24:58.584175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:21:27.898 [2024-12-05 12:24:58.584433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:21:27.898 [2024-12-05 12:24:58.584853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:21:27.899 [2024-12-05 12:24:58.585495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
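The startup notices confirm interrupt mode took effect end to end: -m 0xE is a core mask (binary 1110), so reactors come up on cores 1, 2 and 3, and the app thread plus the three poll-group threads all report intr mode. A quick way to decode such a mask:

    # Print the cores selected by an SPDK core mask (0xE -> 1 2 3):
    mask=0xE
    for i in $(seq 0 31); do
        (( (mask >> i) & 1 )) && printf '%d ' "$i"
    done
    echo
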
00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.899 [2024-12-05 12:24:58.666654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.899 Malloc0 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.899 Delay0 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.899 12:24:58 
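target/abort.sh now provisions the target over RPC; rpc_cmd is autotest plumbing that in effect feeds scripts/rpc.py, so the sequence above and on the next line corresponds to the following plain invocations (the per-flag comments are my gloss, not from the log):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # -u io unit size, -a admin queue depth
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MB RAM disk, 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                   # ~1 s latencies (microsecond units) keep I/O in flight long enough to abort
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
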
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.899 [2024-12-05 12:24:58.750612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.899 12:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:21:28.158 [2024-12-05 12:24:58.935395] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:30.694 Initializing NVMe Controllers 00:21:30.694 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:21:30.694 controller IO queue size 128 less than required 00:21:30.694 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:21:30.694 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:21:30.694 Initialization complete. Launching workers. 
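The abort example then hammers Delay0 through that listener: with queue depth 128 against a controller reporting a smaller I/O queue, requests pile up in the driver (hence the warning above) and the tool races abort commands against in-flight I/O for one second. The flags, as I read the example's usage (my gloss):

    #   -r  transport ID of the listener created above
    #   -c  core mask for the tool itself (core 0 only)
    #   -t  run time in seconds
    #   -l  log level
    #   -q  queue depth per namespace
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
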
00:21:30.694 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34666 00:21:30.694 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34723, failed to submit 66 00:21:30.694 success 34666, unsuccessful 57, failed 0 00:21:30.694 12:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:30.694 12:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.694 12:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.694 12:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.694 12:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:30.694 12:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:21:30.694 12:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.694 12:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.694 rmmod nvme_tcp 00:21:30.694 rmmod nvme_fabrics 00:21:30.694 rmmod nvme_keyring 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 97578 ']' 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 97578 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 97578 ']' 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 97578 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97578 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.694 killing process with pid 97578 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97578' 00:21:30.694 
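The summary is self-consistent: 34723 abort commands were submitted and 66 more could not be submitted; of the submitted ones, 34666 succeeded and 57 came back unsuccessful because the target had already completed those I/Os, and 34666 + 57 = 34723. On the NS line, "failed: 34666" counts the I/Os terminated by successful aborts against the 123 that completed normally, so a zero exit status is the right outcome here. nvmftestfini then starts tearing down, beginning with the kernel modules.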
12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 97578 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 97578 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:30.694 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.952 12:25:01 
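With the target gone, iptr strips only the firewall rules the test added: every rule was installed with an identifying comment, so a save/filter/restore round trip deletes exactly those rules and nothing else, which is what the @791 trace above shows:

    # Rules were added with: -m comment --comment 'SPDK_NVMF:<original args>'
    iptables-save | grep -v SPDK_NVMF | iptables-restore
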
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:21:30.952 00:21:30.952 real 0m4.082s 00:21:30.952 user 0m9.106s 00:21:30.952 sys 0m1.451s 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.952 ************************************ 00:21:30.952 END TEST nvmf_abort 00:21:30.952 ************************************ 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:21:30.952 ************************************ 00:21:30.952 START TEST nvmf_ns_hotplug_stress 00:21:30.952 ************************************ 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:21:30.952 * Looking for test storage... 00:21:30.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:21:30.952 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.211 12:25:01 
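The first trace of the new test probes the installed lcov so it can pick the right coverage-option spelling; the cmp_versions walk that starts here compares dotted versions field by field. A compact equivalent built on sort -V (a sketch, not the repo's helper):

    # True when $1 is strictly older than $2, e.g. lt 1.15 2.
    lt() {
        [ "$1" != "$2" ] &&
            [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt "1.15" "2" && echo "lcov is pre-2.0: keep the --rc lcov_branch_coverage=1 spelling"
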
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.211 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.212 --rc genhtml_branch_coverage=1 00:21:31.212 --rc genhtml_function_coverage=1 00:21:31.212 --rc genhtml_legend=1 00:21:31.212 --rc geninfo_all_blocks=1 00:21:31.212 --rc geninfo_unexecuted_blocks=1 00:21:31.212 00:21:31.212 ' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.212 --rc genhtml_branch_coverage=1 00:21:31.212 --rc genhtml_function_coverage=1 00:21:31.212 --rc genhtml_legend=1 00:21:31.212 --rc geninfo_all_blocks=1 00:21:31.212 --rc geninfo_unexecuted_blocks=1 00:21:31.212 00:21:31.212 
' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.212 --rc genhtml_branch_coverage=1 00:21:31.212 --rc genhtml_function_coverage=1 00:21:31.212 --rc genhtml_legend=1 00:21:31.212 --rc geninfo_all_blocks=1 00:21:31.212 --rc geninfo_unexecuted_blocks=1 00:21:31.212 00:21:31.212 ' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.212 --rc genhtml_branch_coverage=1 00:21:31.212 --rc genhtml_function_coverage=1 00:21:31.212 --rc genhtml_legend=1 00:21:31.212 --rc geninfo_all_blocks=1 00:21:31.212 --rc geninfo_unexecuted_blocks=1 00:21:31.212 00:21:31.212 ' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.212 12:25:01 
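Note that NVME_HOSTNQN here is byte-for-byte the one the abort test generated. That is expected rather than a coincidence: as I understand recent nvme-cli builds, gen-hostnqn derives its UUID from a stable machine identifier (the DMI system UUID, falling back to a random one only when none is available), so repeated calls on the same host agree:

    nvme gen-hostnqn   # same nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-... on every call on this box
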
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.212 12:25:01 
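nvmftestinit now repeats the whole network bring-up for ns_hotplug_stress. Because the previous test's fini already deleted every device and the namespace, the cleanup pass that follows fails on each link with "Cannot find device", and each failure is deliberately swallowed, which is why a lone true is traced after every attempt. Reduced, the pattern is:

    # Teardown tolerates absent devices so init can always start from a clean slate:
    ip link set nvmf_init_br nomaster || true   # 'Cannot find device "nvmf_init_br"' is expected here
    ip link delete nvmf_br type bridge || true
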
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:31.212 Cannot find device "nvmf_init_br" 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:21:31.212 Cannot find device "nvmf_init_br2" 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:31.212 Cannot find device "nvmf_tgt_br" 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:21:31.212 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.212 Cannot find device "nvmf_tgt_br2" 00:21:31.213 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:21:31.213 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:31.213 Cannot find device "nvmf_init_br" 00:21:31.213 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:21:31.213 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:31.213 Cannot find device "nvmf_init_br2" 00:21:31.213 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:21:31.213 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:31.213 Cannot find device "nvmf_tgt_br" 00:21:31.213 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:21:31.213 12:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:31.213 Cannot find device "nvmf_tgt_br2" 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:31.213 Cannot find device "nvmf_br" 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:31.213 Cannot find device "nvmf_init_if" 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:31.213 Cannot find device "nvmf_init_if2" 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.213 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.213 Cannot 
00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:21:31.213 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
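[Editor's note] Condensed, the topology built above is four veth pairs: two initiator-side pairs that stay in the root namespace and two target-side pairs whose _if ends are moved into nvmf_tgt_ns_spdk. A sketch of the same sequence, with names and addresses exactly as in the trace:

    # Create the namespace that will host the SPDK target.
    ip netns add nvmf_tgt_ns_spdk

    # Four veth pairs: the *_if end carries an IP, the *_br end joins the bridge later.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Only the target-side endpoints move into the namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses in the root namespace, target addresses inside it.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up, including loopback inside the namespace.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up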
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
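[Editor's note] All four root-namespace peer ends are then enslaved to one bridge, and iptables opens TCP port 4420 (the NVMe/TCP default) plus bridge-internal forwarding; the SPDK_NVMF: comment on each rule is what lets the harness find and delete its own rules later. A sketch of this step as run above:

    # One bridge ties the four *_br peers together so 10.0.0.0/24 is a flat L2 segment.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Accept NVMe/TCP traffic from both initiator interfaces and let the bridge
    # forward between its own ports; the comment tags the rules for later cleanup.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'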
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:21:31.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:21:31.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms
00:21:31.472
00:21:31.472 --- 10.0.0.3 ping statistics ---
00:21:31.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:31.472 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:21:31.472 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:21:31.472 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms
00:21:31.472
00:21:31.472 --- 10.0.0.4 ping statistics ---
00:21:31.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:31.472 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:21:31.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:31.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:21:31.472
00:21:31.472 --- 10.0.0.1 ping statistics ---
00:21:31.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:31.472 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:21:31.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:31.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms
00:21:31.472
00:21:31.472 --- 10.0.0.2 ping statistics ---
00:21:31.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:31.472 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:31.472 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=97856
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 97856
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 97856 ']'
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:31.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:31.735 12:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:21:31.735 [2024-12-05 12:25:02.401846] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:21:31.735 [2024-12-05 12:25:02.403166] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:21:31.735 [2024-12-05 12:25:02.403241] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:31.735 [2024-12-05 12:25:02.558219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:32.036 [2024-12-05 12:25:02.625372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:32.036 [2024-12-05 12:25:02.625453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:32.036 [2024-12-05 12:25:02.625468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:32.037 [2024-12-05 12:25:02.625479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:32.037 [2024-12-05 12:25:02.625491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:32.037 [2024-12-05 12:25:02.627056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:32.037 [2024-12-05 12:25:02.627203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:32.037 [2024-12-05 12:25:02.627214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:32.037 [2024-12-05 12:25:02.768748] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:21:32.037 [2024-12-05 12:25:02.768847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:21:32.037 [2024-12-05 12:25:02.769251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:21:32.037 [2024-12-05 12:25:02.769588] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
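[Editor's note] nvmfappstart boils down to launching nvmf_tgt inside the namespace and polling until its RPC socket answers. A reduced sketch of that launch (flags copied from the trace; the polling loop is a simplification of the harness's waitforlisten, not its actual code):

    # Start the SPDK NVMe-oF target inside the namespace: shared-memory id 0,
    # all tracepoint groups (-e 0xFFFF), interrupt mode, reactors on cores 1-3 (0xE).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: poll until the RPC socket appears.
    for ((retry = 0; retry < 100; retry++)); do
        kill -0 "$nvmfpid" || exit 1            # bail out if the target died
        [[ -S /var/tmp/spdk.sock ]] && break    # RPC socket is up
        sleep 0.1
    done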
00:21:32.626 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:32.626 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:21:32.626 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:32.626 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:32.626 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:21:32.626 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:32.626 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:21:32.626 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:32.884 [2024-12-05 12:25:03.564431] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:32.884 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:21:33.141 12:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:21:33.398 [2024-12-05 12:25:04.012863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:33.398 12:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:21:33.656 12:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:21:33.915 Malloc0
00:21:33.915 12:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:21:34.172 Delay0
00:21:34.172 12:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:34.429 12:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:21:34.429 NULL1
00:21:34.686 12:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
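[Editor's note] With the target up, the test provisions everything over RPC: a TCP transport, one subsystem with a data listener and a discovery listener, and two deliberately different namespaces: Delay0, a malloc bdev wrapped in a delay bdev, and NULL1, a resizable null bdev. The RPC sequence, pulled straight out of the xtrace above (flag comments are the editor's reading, not from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192          # transport opts copied verbatim from the run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                    # allow any host, serial number, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    $rpc bdev_malloc_create 32 512 -b Malloc0             # 32 MB malloc bdev, 512-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # delay latencies in microseconds (~1 s each)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    $rpc bdev_null_create NULL1 1000 512                  # 1000 MB null bdev, 512-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1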
00:21:34.686 12:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:21:34.686 12:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=97987
00:21:34.686 12:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:34.686 12:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:36.057 Read completed with error (sct=0, sc=11)
00:21:36.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:36.057 12:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:36.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:36.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:36.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:36.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:36.057 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:36.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:36.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:36.316 12:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:21:36.316 12:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:21:36.574 true
00:21:36.574 12:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:36.574 12:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:37.508 12:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:37.508 12:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:21:37.508 12:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:21:37.767 true
00:21:37.767 12:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:37.767 12:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:38.024 12:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
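[Editor's note] Everything from here until the perf run ends is one loop: while the backgrounded spdk_nvme_perf (PID 97987) is still alive, the test hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one megabyte, so the initiator sees a namespace vanish, reappear, and change size under active I/O; the "Read completed with error (sct=0, sc=11)" lines are reads failing against the just-removed namespace. A condensed sketch of that loop as the xtrace shows it (the harness's own loop lives in ns_hotplug_stress.sh; $rpc as in the previous sketch):

    # Hot-plug loop: runs for as long as the perf process exists.
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                     # grow the other namespace by 1 MB
    done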
00:21:38.282 12:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:21:38.282 12:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:21:38.540 true
00:21:38.540 12:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:38.540 12:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:38.797 12:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:39.055 12:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:21:39.055 12:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:21:39.314 true
00:21:39.314 12:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:39.314 12:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:40.250 12:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:40.509 12:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:21:40.509 12:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:21:40.768 true
00:21:40.768 12:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:40.768 12:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:41.027 12:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:41.285 12:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:21:41.285 12:25:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:21:41.544 true
00:21:41.544 12:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:41.544 12:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:42.480 12:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:42.480 12:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:21:42.480 12:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:21:42.737 true
00:21:42.738 12:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:42.738 12:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:42.996 12:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:43.255 12:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:21:43.255 12:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:21:43.514 true
00:21:43.514 12:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:43.514 12:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:44.450 12:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:44.450 12:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:21:44.450 12:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:21:44.707 true
00:21:44.707 12:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:44.707 12:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:44.965 12:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:45.224 12:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:21:45.224 12:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:21:45.483 true
00:21:45.483 12:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:45.483 12:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:46.421 12:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:46.421 12:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:21:46.421 12:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:21:46.680 true
00:21:46.680 12:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:46.680 12:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:46.939 12:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:47.198 12:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:21:47.198 12:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:21:47.457 true
00:21:47.457 12:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:47.457 12:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:48.395 12:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:48.653 12:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:21:48.653 12:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:21:48.913 true
00:21:48.913 12:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:48.913 12:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:49.172 12:25:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:49.431 12:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:21:49.431 12:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:21:49.431 true
00:21:49.431 12:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:49.431 12:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:50.369 12:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:50.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:50.628 12:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:21:50.628 12:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:21:50.886 true
00:21:50.886 12:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:50.886 12:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:51.146 12:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:51.405 12:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:21:51.405 12:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:21:51.405 true
00:21:51.664 12:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:51.664 12:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:52.232 12:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:52.489 12:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:21:52.489 12:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:21:52.747 true
00:21:52.747 12:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:52.747 12:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:53.005 12:25:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:53.264 12:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:21:53.264 12:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:21:53.523 true
00:21:53.523 12:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:53.523 12:25:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:54.466 12:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:54.731 12:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:21:54.731 12:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:21:54.989 true
00:21:54.989 12:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:54.989 12:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:54.989 12:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:55.247 12:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:21:55.247 12:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:21:55.517 true
00:21:55.517 12:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:55.517 12:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:56.467 12:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:56.726 12:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:21:56.726 12:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:21:56.726 true
00:21:56.985 12:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:56.985 12:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:57.244 12:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:57.503 12:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:21:57.504 12:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:21:57.504 true
00:21:57.504 12:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:57.504 12:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:57.763 12:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:58.021 12:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:21:58.021 12:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:21:58.281 true
00:21:58.281 12:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:58.281 12:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 12:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:21:59.658 12:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:21:59.658 12:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:21:59.917 true
00:21:59.917 12:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:21:59.917 12:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:22:00.851 12:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:22:00.851 12:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:22:00.851 12:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:22:01.110 true
00:22:01.110 12:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:22:01.110 12:25:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:22:01.369 12:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:22:01.628 12:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:22:01.628 12:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:22:01.887 true
00:22:01.887 12:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:22:01.887 12:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:22:02.824 12:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:22:03.084 12:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:22:03.084 12:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:22:03.084 true
00:22:03.084 12:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987
00:22:03.084 12:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:22:03.343 12:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:22:03.602 12:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:22:03.602 12:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:22:03.861 true
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987 00:22:03.861 12:25:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:04.809 12:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:05.067 Initializing NVMe Controllers 00:22:05.067 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.067 Controller IO queue size 128, less than required. 00:22:05.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:05.067 Controller IO queue size 128, less than required. 00:22:05.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:05.067 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:05.067 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:05.067 Initialization complete. Launching workers. 00:22:05.067 ======================================================== 00:22:05.067 Latency(us) 00:22:05.067 Device Information : IOPS MiB/s Average min max 00:22:05.067 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 724.28 0.35 96790.14 2775.44 1056973.29 00:22:05.067 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13831.02 6.75 9254.16 2273.77 485693.80 00:22:05.067 ======================================================== 00:22:05.067 Total : 14555.30 7.11 13610.01 2273.77 1056973.29 00:22:05.067 00:22:05.067 12:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:22:05.067 12:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:22:05.067 true 00:22:05.325 12:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97987 00:22:05.325 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (97987) - No such process 00:22:05.325 12:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 97987 00:22:05.326 12:25:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:05.326 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:05.583 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:22:05.583 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:22:05.583 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:22:05.583 12:25:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:05.583 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:22:05.841 null0 00:22:05.841 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:05.841 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:05.841 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:22:06.099 null1 00:22:06.099 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:06.099 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:06.099 12:25:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:22:06.357 null2 00:22:06.357 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:06.357 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:06.357 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:22:06.615 null3 00:22:06.615 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:06.615 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:06.615 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:22:06.874 null4 00:22:06.874 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:06.874 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:06.874 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:22:06.874 null5 00:22:06.874 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:06.874 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:06.874 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:22:07.133 null6 00:22:07.133 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:07.133 12:25:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:07.133 12:25:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:22:07.392 null7 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
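The @59-@64 entries across the last few chunks are the launcher side: one null bdev per worker, then one background add_remove per namespace ID, with PIDs collected for the wait that appears just below. Sketched in bash under the same assumptions as the add_remove sketch above (nthreads is the loop bound visible in the trace; the literal sizes match the bdev_null_create calls traced):

# Launcher, reconstructed from ns_hotplug_stress.sh@59-@66: create the backing
# null bdevs (100 MB, 4096-byte blocks, as traced), fork one add_remove worker
# per namespace, then block until every worker has finished.
nthreads=8
pids=()

for ((i = 0; i < nthreads; i++)); do
    "$rpc_py" bdev_null_create "null$i" 100 4096   # echoes the bdev name, e.g. null7
done

for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &   # one hot-plug worker per namespace ID
    pids+=($!)                         # matches the pids+=($!) entries in the trace
done

wait "${pids[@]}"   # matches the "wait 98995 98997 ..." entry in the next chunk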
00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:07.392 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:22:07.393 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:22:07.393 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:22:07.393 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:22:07.393 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:22:07.393 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:22:07.393 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.393 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 98995 98997 98999 99000 99002 99004 99006 99008 00:22:07.393 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:07.652 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:07.652 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:07.652 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:07.652 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:07.652 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:22:07.652 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:07.652 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:07.652 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:07.912 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.171 12:25:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:08.171 12:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:08.171 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:08.171 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:08.430 12:25:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.430 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:08.689 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.689 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.689 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:08.689 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.689 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.690 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:08.690 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.690 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.690 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:08.690 12:25:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:08.690 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:08.690 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:08.690 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:08.949 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:08.949 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:08.949 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:08.949 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:08.949 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.949 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.949 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:08.949 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.950 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.950 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:08.950 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:08.950 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:08.950 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.209 
12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:09.209 12:25:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:09.209 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.209 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.209 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:09.209 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.469 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:09.728 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:09.729 12:25:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:09.729 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:09.988 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:09.988 
12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.248 12:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:10.248 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.248 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.248 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:10.248 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:10.507 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:10.507 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:10.507 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:10.507 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:10.507 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:10.507 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:10.507 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:10.766 12:25:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:10.766 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:10.767 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:11.026 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:11.284 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:11.284 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.285 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.285 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:11.285 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.285 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.285 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:11.285 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.285 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.285 12:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:11.285 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.285 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.285 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:11.285 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.285 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.285 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:11.285 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:11.544 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.804 12:25:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:11.804 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:12.064 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:12.324 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:12.324 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.324 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.324 12:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:22:12.324 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:22:12.583 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.842 12:25:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:12.842 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:13.101 12:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:13.101 rmmod nvme_tcp 00:22:13.360 rmmod nvme_fabrics 00:22:13.360 rmmod nvme_keyring 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@129 -- # return 0 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 97856 ']' 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 97856 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 97856 ']' 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 97856 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97856 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:13.360 killing process with pid 97856 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97856' 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 97856 00:22:13.360 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 97856 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 
nomaster
00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:22:13.619 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:22:13.878 ************************************
00:22:13.878 END TEST nvmf_ns_hotplug_stress
00:22:13.878 ************************************
00:22:13.878
00:22:13.878 real 0m42.859s
00:22:13.878 user 3m8.753s
00:22:13.878 sys 0m16.151s
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:22:13.878 ************************************
00:22:13.878 START TEST nvmf_delete_subsystem
00:22:13.878 ************************************
00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:22:13.878 * Looking for test storage...
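The ns_hotplug_stress run that ends above spends its time in a tight loop (ns_hotplug_stress.sh@16-18 in the trace) attaching and detaching namespaces on cnode1 while traffic is in flight. A minimal sketch of that pattern follows; the rpc.py subcommands and their argument order are taken from the trace, while the randomized nsid selection and the exact loop body ordering are assumptions:

    # Hedged sketch of the hot-plug loop traced above; which nsid gets added or
    # removed on a given pass is randomized here as an assumption.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        n=$((RANDOM % 8 + 1))            # nsid 1..8 maps to bdevs null0..null7
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" || true
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))" || true
    done

The "|| true" reflects that an add can race a remove of the same nsid; the stress test only cares that the target survives the churn, which the clean "END TEST" banner above confirms.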
00:22:13.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:22:13.878 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.138 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:14.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.139 --rc genhtml_branch_coverage=1 00:22:14.139 --rc genhtml_function_coverage=1 00:22:14.139 --rc genhtml_legend=1 00:22:14.139 --rc geninfo_all_blocks=1 00:22:14.139 --rc geninfo_unexecuted_blocks=1 00:22:14.139 00:22:14.139 ' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:14.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.139 --rc genhtml_branch_coverage=1 00:22:14.139 --rc genhtml_function_coverage=1 00:22:14.139 --rc genhtml_legend=1 00:22:14.139 --rc geninfo_all_blocks=1 00:22:14.139 --rc geninfo_unexecuted_blocks=1 00:22:14.139 00:22:14.139 ' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:14.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.139 --rc genhtml_branch_coverage=1 00:22:14.139 --rc genhtml_function_coverage=1 00:22:14.139 --rc genhtml_legend=1 00:22:14.139 --rc geninfo_all_blocks=1 00:22:14.139 --rc geninfo_unexecuted_blocks=1 00:22:14.139 00:22:14.139 ' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:14.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.139 --rc genhtml_branch_coverage=1 00:22:14.139 --rc genhtml_function_coverage=1 00:22:14.139 --rc 
genhtml_legend=1 00:22:14.139 --rc geninfo_all_blocks=1 00:22:14.139 --rc geninfo_unexecuted_blocks=1 00:22:14.139 00:22:14.139 ' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.139 12:25:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:14.139 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.140 12:25:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:14.140 Cannot find device "nvmf_init_br" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:14.140 Cannot find device "nvmf_init_br2" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:14.140 Cannot find device "nvmf_tgt_br" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:14.140 Cannot find device "nvmf_tgt_br2" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:14.140 Cannot find device "nvmf_init_br" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:14.140 Cannot find device "nvmf_init_br2" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:14.140 Cannot find device "nvmf_tgt_br" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:14.140 Cannot find device "nvmf_tgt_br2" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:14.140 Cannot find device "nvmf_br" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:14.140 Cannot find device "nvmf_init_if" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:14.140 Cannot find device "nvmf_init_if2" 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:14.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:14.140 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:14.140 12:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:14.140 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:14.405 12:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:14.405 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:14.405 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:22:14.405 00:22:14.405 --- 10.0.0.3 ping statistics --- 00:22:14.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.405 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:14.405 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:14.405 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:14.405 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:22:14.405 00:22:14.406 --- 10.0.0.4 ping statistics --- 00:22:14.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.406 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:14.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:22:14.406 00:22:14.406 --- 10.0.0.1 ping statistics --- 00:22:14.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.406 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:14.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:14.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:22:14.406 00:22:14.406 --- 10.0.0.2 ping statistics --- 00:22:14.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.406 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.406 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=100381 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 100381 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 100381 ']' 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
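The four successful pings above close out nvmf_veth_init: at this point the fixture is a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining the host-side ends, and iptables rules opening TCP port 4420. Condensed into a sketch, with one initiator pair and one target pair shown where the log builds two of each, and the iptables comment matching from the trace dropped for brevity; interface names and addresses are taken from the trace:

    # Condensed sketch of the nvmf_veth_init steps traced above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins the host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # sanity check before the test starts

Teardown, traced earlier when ns_hotplug_stress finished, is the mirror image: links set nomaster and down, bridge and veths deleted, then the namespace removed.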
00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:14.665 12:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:22:14.665 [2024-12-05 12:25:45.346129] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:22:14.665 [2024-12-05 12:25:45.347464] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:22:14.665 [2024-12-05 12:25:45.347533] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:14.665 [2024-12-05 12:25:45.500893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:14.923 [2024-12-05 12:25:45.551887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:14.923 [2024-12-05 12:25:45.551961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:14.923 [2024-12-05 12:25:45.551976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:14.923 [2024-12-05 12:25:45.551986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:14.923 [2024-12-05 12:25:45.551996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:14.923 [2024-12-05 12:25:45.553332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:14.923 [2024-12-05 12:25:45.553339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:14.923 [2024-12-05 12:25:45.661834] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:22:14.923 [2024-12-05 12:25:45.662075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:22:14.923 [2024-12-05 12:25:45.662220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
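The NOTICE lines above come from nvmfappstart: -m 0x3 gives the target two reactors (cores 0 and 1, matching "Total cores available: 2"), and --interrupt-mode triggers spdk_interrupt_mode_enable plus the per-thread "intr mode" messages. A sketch of the launch-and-wait step, with the launch command copied from the trace and a plain polling loop assumed as a stand-in for the autotest waitforlisten helper:

    # Launch copied from the trace; the readiness poll is an assumed stand-in
    # for waitforlisten.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1    # the target is up once it answers on its RPC socket
    done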
00:22:15.488 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.488 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:22:15.488 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.488 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.488 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 [2024-12-05 12:25:46.374467] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 [2024-12-05 12:25:46.394624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:15.746 NULL1 00:22:15.746 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.746 12:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 Delay0 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=100432 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:22:15.747 12:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:22:15.747 [2024-12-05 12:25:46.599899] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
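With the deprecation warning above, the fixture is complete and the actual delete-under-load experiment begins. The key trick is the delay bdev: wrapping NULL1 in Delay0 with 1000000 us (roughly one second) on every latency knob guarantees spdk_nvme_perf always has commands stuck in flight, so the subsystem deletion that follows in the trace lands on busy qpairs. Condensed from the rpc_cmd trace above; every command and flag is copied from the log, only the rpc() shorthand is added here:

    # Delete-under-load sequence traced above, condensed.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc bdev_null_create NULL1 1000 512          # null backing bdev, 512-byte blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &    # keep 128 I/Os in flight
    perf_pid=$!
    sleep 2
    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O

The bursts of failed completions that follow in the trace are the expected outcome of that last call.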
00:22:17.649 12:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.649 12:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.649 12:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 starting I/O failed: -6 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 [2024-12-05 12:25:48.640543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8387e0 is same with the state(6) to be set 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 
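A reading of the burst above, offered as interpretation rather than fact: the "completed with error (sct=0, sc=8)" records are spdk_nvme_perf reporting its queued commands back. If the standard NVMe status encoding applies, status code type 0 is the generic command status set and status code 0x08 is "Command Aborted due to SQ Deletion", which fits qpairs being torn down under the deleted subsystem. The "starting I/O failed: -6" lines are new submissions refused outright, -6 matching -ENXIO (no such device or address). A hypothetical triage snippet for a saved copy of this console output (build.log is an assumed filename):

    # Tally how I/O failed: aborted in-flight completions vs. refused submissions.
    grep -c 'completed with error (sct=0, sc=8)' build.log
    grep -c 'starting I/O failed: -6' build.log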
00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.909 Write completed with error (sct=0, sc=8) 00:22:17.909 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 
Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 [2024-12-05 12:25:48.642854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 [2024-12-05 12:25:48.642962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22dfad0 is same with the state(6) to be set 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 starting I/O failed: -6 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error 
(sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 starting I/O failed: -6 00:22:17.910 [2024-12-05 12:25:48.646150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2ab000d4d0 is same with the state(6) to be set 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 
Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Write completed with error (sct=0, sc=8) 00:22:17.910 Read completed with error (sct=0, sc=8) 00:22:18.847 [2024-12-05 12:25:49.614899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82caa0 is same with the state(6) to be set 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 [2024-12-05 12:25:49.641431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x837a50 is same with the state(6) to be set 00:22:18.847 Read completed with 
error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 [2024-12-05 12:25:49.641937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83aea0 is same with the state(6) to be set 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 
00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Read completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 Write completed with error (sct=0, sc=8) 00:22:18.847 [2024-12-05 12:25:49.643133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2ab000d020 is same with the state(6) to be set 00:22:18.847 Initializing NVMe Controllers 00:22:18.847 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:18.847 Controller IO queue size 128, less than required. 00:22:18.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:18.847 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:22:18.847 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:22:18.847 Initialization complete. Launching workers. 00:22:18.847 ======================================================== 00:22:18.847 Latency(us) 00:22:18.847 Device Information : IOPS MiB/s Average min max 00:22:18.847 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.42 0.08 961252.13 359.28 2003019.66 00:22:18.847 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.34 0.08 904635.16 1310.68 2003245.92 00:22:18.847 ======================================================== 00:22:18.847 Total : 324.75 0.16 932253.19 359.28 2003245.92 00:22:18.847 00:22:18.847 [2024-12-05 12:25:49.644391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82caa0 (9): Bad file descriptor 00:22:18.847 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:18.847 12:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.847 12:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:22:18.847 12:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100432 00:22:18.847 12:25:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100432 00:22:19.415 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (100432) - No such process 00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 100432 00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 100432 00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.415 12:25:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 100432
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:22:19.415 [2024-12-05 12:25:50.171205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=100478
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100478
00:22:19.415 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:22:19.674 [2024-12-05 12:25:50.362785] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:22:19.932 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:22:19.932 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100478
00:22:19.932 12:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:22:20.498 12:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:22:20.498 12:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100478
00:22:20.498 12:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:22:21.063 12:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:22:21.063 12:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100478
00:22:21.063 12:25:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:22:21.628 12:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:22:21.628 12:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100478
00:22:21.628 12:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:22:21.885 12:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:22:21.885 12:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100478
00:22:21.885 12:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:22:22.450 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:22:22.450 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100478
00:22:22.450 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:22:22.708 Initializing NVMe Controllers
00:22:22.708 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:22:22.708 Controller IO queue size 128, less than required.
00:22:22.708 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:22.708 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:22:22.708 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:22:22.708 Initialization complete. Launching workers.
00:22:22.708 ========================================================
00:22:22.708 Latency(us)
00:22:22.708 Device Information : IOPS MiB/s Average min max
00:22:22.708 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004936.87 1000174.67 1016435.22
00:22:22.708 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007551.95 1000188.97 1020163.53
00:22:22.708 ========================================================
00:22:22.708 Total : 256.00 0.12 1006244.41 1000174.67 1020163.53
00:22:22.708
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100478
00:22:22.967 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (100478) - No such process
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 100478
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:22.967 rmmod nvme_tcp
00:22:22.967 rmmod nvme_fabrics
00:22:22.967 rmmod nvme_keyring
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 100381 ']'
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 100381
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 100381 ']'
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 100381
00:22:22.967 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:22:23.226 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:23.226 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 --
# ps --no-headers -o comm= 100381 00:22:23.226 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.226 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.226 killing process with pid 100381 00:22:23.226 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100381' 00:22:23.226 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 100381 00:22:23.226 12:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 100381 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:23.226 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:22:23.521 00:22:23.521 real 0m9.684s 00:22:23.521 user 0m25.028s 00:22:23.521 sys 0m1.765s 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:22:23.521 ************************************ 00:22:23.521 END TEST nvmf_delete_subsystem 00:22:23.521 ************************************ 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:23.521 ************************************ 00:22:23.521 START TEST nvmf_host_management 00:22:23.521 ************************************ 00:22:23.521 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:22:23.877 * Looking for test storage... 
00:22:23.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.877 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:23.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.878 --rc genhtml_branch_coverage=1 00:22:23.878 --rc genhtml_function_coverage=1 00:22:23.878 --rc genhtml_legend=1 00:22:23.878 --rc geninfo_all_blocks=1 00:22:23.878 --rc geninfo_unexecuted_blocks=1 00:22:23.878 00:22:23.878 ' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:23.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.878 --rc genhtml_branch_coverage=1 00:22:23.878 --rc genhtml_function_coverage=1 00:22:23.878 --rc genhtml_legend=1 00:22:23.878 --rc geninfo_all_blocks=1 00:22:23.878 --rc geninfo_unexecuted_blocks=1 00:22:23.878 00:22:23.878 ' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:23.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.878 --rc genhtml_branch_coverage=1 00:22:23.878 --rc genhtml_function_coverage=1 00:22:23.878 --rc genhtml_legend=1 00:22:23.878 --rc geninfo_all_blocks=1 00:22:23.878 --rc geninfo_unexecuted_blocks=1 00:22:23.878 00:22:23.878 ' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:23.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.878 --rc genhtml_branch_coverage=1 00:22:23.878 --rc genhtml_function_coverage=1 00:22:23.878 --rc genhtml_legend=1 
00:22:23.878 --rc geninfo_all_blocks=1 00:22:23.878 --rc geninfo_unexecuted_blocks=1 00:22:23.878 00:22:23.878 ' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.878 12:25:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:23.878 12:25:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:23.878 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:23.879 Cannot find device "nvmf_init_br" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:23.879 Cannot find device "nvmf_init_br2" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:23.879 Cannot find device "nvmf_tgt_br" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:23.879 Cannot find device "nvmf_tgt_br2" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:23.879 Cannot find device "nvmf_init_br" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:22:23.879 Cannot find device "nvmf_init_br2" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:23.879 Cannot find device "nvmf_tgt_br" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:23.879 Cannot find device "nvmf_tgt_br2" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:23.879 Cannot find device "nvmf_br" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:23.879 Cannot find device "nvmf_init_if" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:23.879 Cannot find device "nvmf_init_if2" 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:23.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:23.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:23.879 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:24.139 12:25:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:24.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:24.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:22:24.139 00:22:24.139 --- 10.0.0.3 ping statistics --- 00:22:24.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.139 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:24.139 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:24.139 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:22:24.139 00:22:24.139 --- 10.0.0.4 ping statistics --- 00:22:24.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.139 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:24.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:22:24.139 00:22:24.139 --- 10.0.0.1 ping statistics --- 00:22:24.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.139 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:24.139 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:24.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:24.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:22:24.140 00:22:24.140 --- 10.0.0.2 ping statistics --- 00:22:24.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.140 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=100755 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 100755 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100755 ']' 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
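For orientation, the nvmf/common.sh steps traced above (@177 through @219) build a disposable dual-path TCP topology: two initiator-side veth pairs stay in the root namespace, the two target-side interfaces move into the nvmf_tgt_ns_spdk namespace, every peer end is enslaved to the nvmf_br bridge, and the port-4420 ACCEPT rules are tagged SPDK_NVMF so teardown can strip exactly these rules later. A condensed stand-alone sketch of one initiator path and one target path, using the same names and addresses as the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# tagged so cleanup can do: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3    # root namespace to target namespace, across the bridge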
00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.140 12:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:24.399 [2024-12-05 12:25:55.013969] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:24.399 [2024-12-05 12:25:55.015209] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:22:24.399 [2024-12-05 12:25:55.015301] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.399 [2024-12-05 12:25:55.159457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.399 [2024-12-05 12:25:55.210589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.399 [2024-12-05 12:25:55.210657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.399 [2024-12-05 12:25:55.210667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.399 [2024-12-05 12:25:55.210674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.399 [2024-12-05 12:25:55.210681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.399 [2024-12-05 12:25:55.212010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.399 [2024-12-05 12:25:55.212147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.399 [2024-12-05 12:25:55.212336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:24.399 [2024-12-05 12:25:55.212340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.658 [2024-12-05 12:25:55.331239] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:24.658 [2024-12-05 12:25:55.331774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:22:24.658 [2024-12-05 12:25:55.332265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:24.658 [2024-12-05 12:25:55.332270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:24.658 [2024-12-05 12:25:55.332589] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
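The four "Reactor started on core 1..4" notices match the -m 0x1E core mask, and the thread.c lines confirm every nvmf_tgt poll-group thread came up in interrupt mode, which is the point of this test variant. The launch that produced them, as traced at common.sh@508 above (run in the background; the test then waits on the pid with waitforlisten):

# SPDK target inside the test namespace: instance id 0, tracepoint group
# mask 0xFFFF, event-driven (interrupt) mode, cores 1-4 (mask 0x1E)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!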
00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.224 12:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:25.224 [2024-12-05 12:25:56.005316] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.224 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:25.224 Malloc0 00:22:25.484 [2024-12-05 12:25:56.101422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=100827 00:22:25.484 12:25:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 100827 /var/tmp/bdevperf.sock 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100827 ']' 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:25.484 { 00:22:25.484 "params": { 00:22:25.484 "name": "Nvme$subsystem", 00:22:25.484 "trtype": "$TEST_TRANSPORT", 00:22:25.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.484 "adrfam": "ipv4", 00:22:25.484 "trsvcid": "$NVMF_PORT", 00:22:25.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:25.484 "hdgst": ${hdgst:-false}, 00:22:25.484 "ddgst": ${ddgst:-false} 00:22:25.484 }, 00:22:25.484 "method": "bdev_nvme_attach_controller" 00:22:25.484 } 00:22:25.484 EOF 00:22:25.484 )") 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
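The config+=("$(cat <<-EOF ... EOF)") block above is gen_nvmf_target_json at work: a bash here-doc template, one "params" stanza per subsystem, whose $subsystem, $TEST_TRANSPORT, and $NVMF_FIRST_TARGET_IP placeholders expand at append time; the stanzas are then joined with IFS=',' and validated with jq, producing the resolved JSON printed at the next trace line. A minimal re-creation of the pattern (the outer "subsystems" wrapper that bdevperf ultimately needs is not visible in this trace and is omitted here; values are hard-coded to match the resolved output):

gen_json() {
    local subsystem=${1:-0} config=()
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
    local IFS=,
    # a single stanza is already valid JSON; multiple stanzas would need the wrapper
    printf '%s\n' "${config[*]}" | jq .
}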
00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:22:25.484 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:25.484 "params": { 00:22:25.484 "name": "Nvme0", 00:22:25.484 "trtype": "tcp", 00:22:25.484 "traddr": "10.0.0.3", 00:22:25.484 "adrfam": "ipv4", 00:22:25.484 "trsvcid": "4420", 00:22:25.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:25.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:25.484 "hdgst": false, 00:22:25.484 "ddgst": false 00:22:25.484 }, 00:22:25.484 "method": "bdev_nvme_attach_controller" 00:22:25.484 }' 00:22:25.484 [2024-12-05 12:25:56.208538] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:22:25.484 [2024-12-05 12:25:56.208692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100827 ] 00:22:25.744 [2024-12-05 12:25:56.360116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.744 [2024-12-05 12:25:56.413743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.744 Running I/O for 10 seconds... 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:22:26.004 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:22:26.265 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:22:26.265 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:22:26.265 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:22:26.265 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.265 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:26.265 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.265 12:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.265 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=549 00:22:26.265 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 549 -ge 100 ']' 00:22:26.265 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:22:26.265 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:22:26.265 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:22:26.265 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:22:26.265 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.265 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:26.265 [2024-12-05 12:25:57.029354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.265 [2024-12-05 12:25:57.029404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029419] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.265 [2024-12-05 12:25:57.029428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.265 [2024-12-05 12:25:57.029447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.265 [2024-12-05 12:25:57.029465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5b660 is same with the state(6) to be set 00:22:26.265 [2024-12-05 12:25:57.029838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.265 [2024-12-05 12:25:57.029855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.265 [2024-12-05 12:25:57.029881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.265 [2024-12-05 12:25:57.029901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.265 [2024-12-05 12:25:57.029919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.265 [2024-12-05 12:25:57.029937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.265 [2024-12-05 12:25:57.029955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.265 [2024-12-05 12:25:57.029964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.265 [2024-12-05 12:25:57.029972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.029982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.029990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.266 [2024-12-05 12:25:57.030764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.266 [2024-12-05 12:25:57.030772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.030987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.030996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.031004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.031014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.031022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.031032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.031040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.031050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.031058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.031068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.031076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.031086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.267 [2024-12-05 12:25:57.031094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.032171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:26.267 task offset: 81920 on job bdev=Nvme0n1 fails 00:22:26.267 00:22:26.267 Latency(us) 00:22:26.267 [2024-12-05T12:25:57.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.267 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.267 Job: Nvme0n1 ended in about 0.43 seconds with error 00:22:26.267 Verification LBA range: start 0x0 length 0x400 00:22:26.267 Nvme0n1 : 0.43 1471.52 91.97 147.15 0.00 38388.86 2204.39 39321.60 00:22:26.267 [2024-12-05T12:25:57.136Z] =================================================================================================================== 00:22:26.267 [2024-12-05T12:25:57.136Z] Total : 1471.52 91.97 147.15 0.00 38388.86 2204.39 39321.60 00:22:26.267 [2024-12-05 12:25:57.033996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:26.267 [2024-12-05 12:25:57.034017] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5b660 (9): Bad file descriptor 00:22:26.267 [2024-12-05 12:25:57.035061] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:22:26.267 [2024-12-05 12:25:57.035158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:26.267 [2024-12-05 12:25:57.035181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.267 [2024-12-05 12:25:57.035195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:22:26.267 [2024-12-05 12:25:57.035203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:22:26.267 [2024-12-05 12:25:57.035212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:26.267 [2024-12-05 12:25:57.035220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc5b660 00:22:26.267 [2024-12-05 12:25:57.035254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5b660 (9): Bad file descriptor 00:22:26.267 [2024-12-05 12:25:57.035271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:26.267 [2024-12-05 12:25:57.035331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:26.267 [2024-12-05 12:25:57.035344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:26.267 [2024-12-05 12:25:57.035354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
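What just happened, in order: waitforio polled bdevperf's RPC socket until num_read_ops crossed the 100-I/O threshold (67 on the first poll, 549 on the second), then the test revoked the host's access with nvmf_subsystem_remove_host. The target deletes the queue pairs, so every in-flight write completes as ABORTED - SQ DELETION, the reconnect is refused ("Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host ..."), and bdevperf's controller reset fails; that failure is the expected result for this stage. The same fault injection done by hand, as a sketch assuming SPDK's scripts/rpc.py (target-side calls use the default /var/tmp/spdk.sock socket):

# wait until bdevperf has completed at least 100 reads ...
while [ "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')" -lt 100 ]; do
    sleep 0.25
done
# ... then revoke the host; queued I/O aborts and reconnects are rejected
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restoring access lets a fresh initiator connect again
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0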
00:22:26.267 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.267 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:22:26.267 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.267 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:26.267 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.267 12:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 100827 00:22:27.204 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (100827) - No such process 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:27.204 { 00:22:27.204 "params": { 00:22:27.204 "name": "Nvme$subsystem", 00:22:27.204 "trtype": "$TEST_TRANSPORT", 00:22:27.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:27.204 "adrfam": "ipv4", 00:22:27.204 "trsvcid": "$NVMF_PORT", 00:22:27.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:27.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:27.204 "hdgst": ${hdgst:-false}, 00:22:27.204 "ddgst": ${ddgst:-false} 00:22:27.204 }, 00:22:27.204 "method": "bdev_nvme_attach_controller" 00:22:27.204 } 00:22:27.204 EOF 00:22:27.204 )") 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:22:27.204 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
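With the host NQN restored, the trace above launches a second, shorter bdevperf pass that must now complete cleanly (its resolved config and results follow below). The config is handed over as --json /dev/fd/62: that is bash process substitution, and the fd number simply differs between the two runs (/dev/fd/63 the first time). An equivalent invocation, reusing the hypothetical gen_json sketch from earlier:

# second pass: 64 outstanding I/Os of 64 KiB each, verify workload, 1 second
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_json 0) -q 64 -o 65536 -w verify -t 1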
00:22:27.463 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:22:27.463 12:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:27.463 "params": { 00:22:27.463 "name": "Nvme0", 00:22:27.463 "trtype": "tcp", 00:22:27.463 "traddr": "10.0.0.3", 00:22:27.463 "adrfam": "ipv4", 00:22:27.463 "trsvcid": "4420", 00:22:27.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:27.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:27.463 "hdgst": false, 00:22:27.463 "ddgst": false 00:22:27.463 }, 00:22:27.463 "method": "bdev_nvme_attach_controller" 00:22:27.463 }' 00:22:27.463 [2024-12-05 12:25:58.121845] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:22:27.463 [2024-12-05 12:25:58.121941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100869 ] 00:22:27.463 [2024-12-05 12:25:58.326567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.722 [2024-12-05 12:25:58.389912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.722 Running I/O for 1 seconds... 00:22:29.114 1697.00 IOPS, 106.06 MiB/s 00:22:29.114 Latency(us) 00:22:29.114 [2024-12-05T12:25:59.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.114 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:29.114 Verification LBA range: start 0x0 length 0x400 00:22:29.114 Nvme0n1 : 1.04 1730.19 108.14 0.00 0.00 36324.51 5362.04 34078.72 00:22:29.114 [2024-12-05T12:25:59.983Z] =================================================================================================================== 00:22:29.114 [2024-12-05T12:25:59.983Z] Total : 1730.19 108.14 0.00 0.00 36324.51 5362.04 34078.72 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:22:29.114 rmmod nvme_tcp 00:22:29.114 rmmod nvme_fabrics 00:22:29.114 rmmod nvme_keyring 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 100755 ']' 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 100755 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 100755 ']' 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 100755 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100755 00:22:29.114 killing process with pid 100755 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100755' 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 100755 00:22:29.114 12:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 100755 00:22:29.374 [2024-12-05 12:26:00.202742] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:22:29.374 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:29.374 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:29.374 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:29.374 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:22:29.374 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:22:29.374 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:22:29.374 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:29.374 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:29.633 12:26:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:29.633 00:22:29.633 real 0m6.116s 00:22:29.633 user 0m17.914s 00:22:29.633 sys 0m2.165s 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.633 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:22:29.633 ************************************ 00:22:29.633 END TEST nvmf_host_management 00:22:29.633 ************************************ 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:29.895 ************************************ 00:22:29.895 START TEST nvmf_lvol 00:22:29.895 ************************************ 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:22:29.895 * Looking for test storage... 00:22:29.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:29.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.895 --rc genhtml_branch_coverage=1 00:22:29.895 --rc genhtml_function_coverage=1 00:22:29.895 --rc genhtml_legend=1 00:22:29.895 --rc geninfo_all_blocks=1 00:22:29.895 --rc geninfo_unexecuted_blocks=1 00:22:29.895 00:22:29.895 ' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:29.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.895 --rc genhtml_branch_coverage=1 00:22:29.895 --rc genhtml_function_coverage=1 00:22:29.895 --rc genhtml_legend=1 00:22:29.895 --rc geninfo_all_blocks=1 00:22:29.895 --rc geninfo_unexecuted_blocks=1 00:22:29.895 00:22:29.895 ' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:29.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.895 --rc genhtml_branch_coverage=1 00:22:29.895 --rc genhtml_function_coverage=1 00:22:29.895 --rc genhtml_legend=1 00:22:29.895 --rc geninfo_all_blocks=1 00:22:29.895 --rc geninfo_unexecuted_blocks=1 00:22:29.895 00:22:29.895 ' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:29.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.895 --rc genhtml_branch_coverage=1 00:22:29.895 --rc genhtml_function_coverage=1 00:22:29.895 --rc genhtml_legend=1 00:22:29.895 --rc geninfo_all_blocks=1 00:22:29.895 --rc geninfo_unexecuted_blocks=1 00:22:29.895 00:22:29.895 ' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.895 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.896 12:26:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:29.896 Cannot find device "nvmf_init_br" 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:22:29.896 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:30.156 Cannot find device "nvmf_init_br2" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:30.156 Cannot find device "nvmf_tgt_br" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:30.156 Cannot find device "nvmf_tgt_br2" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:30.156 Cannot find device "nvmf_init_br" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:30.156 Cannot find device "nvmf_init_br2" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:30.156 Cannot find 
device "nvmf_tgt_br" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:30.156 Cannot find device "nvmf_tgt_br2" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:30.156 Cannot find device "nvmf_br" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:30.156 Cannot find device "nvmf_init_if" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:30.156 Cannot find device "nvmf_init_if2" 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:30.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:30.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:22:30.156 12:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:22:30.156 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:22:30.156 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:22:30.156 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:22:30.415 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:22:30.415 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:22:30.415 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:22:30.416 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:30.416 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms
00:22:30.416 
00:22:30.416 --- 10.0.0.3 ping statistics ---
00:22:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:30.416 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:22:30.416 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:22:30.416 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:22:30.416 
00:22:30.416 --- 10.0.0.4 ping statistics ---
00:22:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:30.416 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:22:30.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:30.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms
00:22:30.416 
00:22:30.416 --- 10.0.0.1 ping statistics ---
00:22:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:30.416 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:22:30.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:30.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms
00:22:30.416 
00:22:30.416 --- 10.0.0.2 ping statistics ---
00:22:30.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:30.416 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=101142
00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 101142
00:22:30.416 12:26:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 101142 ']' 00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.416 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:30.416 [2024-12-05 12:26:01.208570] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:30.416 [2024-12-05 12:26:01.209836] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:22:30.416 [2024-12-05 12:26:01.210499] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.675 [2024-12-05 12:26:01.366992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:30.675 [2024-12-05 12:26:01.423245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.675 [2024-12-05 12:26:01.423350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.675 [2024-12-05 12:26:01.423367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.675 [2024-12-05 12:26:01.423378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.675 [2024-12-05 12:26:01.423388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.675 [2024-12-05 12:26:01.424754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.675 [2024-12-05 12:26:01.424894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.675 [2024-12-05 12:26:01.424895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.675 [2024-12-05 12:26:01.537722] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:22:30.675 [2024-12-05 12:26:01.537732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:30.675 [2024-12-05 12:26:01.537953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:22:30.675 [2024-12-05 12:26:01.538430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
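(Everything nvmf_veth_init does above boils down to a small iproute2 recipe: veth pairs joined by a bridge, with the target ends hidden in a private network namespace where nvmf_tgt then listens. A condensed sketch for one initiator/target pair; the real helper also creates the *_if2/*_br2 pair and the iptables ACCEPT rules traced above.)

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # the bridge glues the two pairs together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# The four pings above are the smoke test for this topology; only once they
# all answer is nvmf_tgt launched inside the namespace with
# -i 0 -e 0xFFFF --interrupt-mode -m 0x7, as traced here.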
00:22:30.949 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.949 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:22:30.949 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.949 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.949 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:30.949 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.949 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:31.207 [2024-12-05 12:26:01.881787] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.207 12:26:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:31.465 12:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:22:31.465 12:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:32.032 12:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:22:32.032 12:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:22:32.032 12:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:22:32.291 12:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9b30dcbc-1f79-402b-80ff-6af3b113f160 00:22:32.550 12:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b30dcbc-1f79-402b-80ff-6af3b113f160 lvol 20 00:22:32.550 12:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c23083a2-20a7-42b2-9685-86da2b5cff29 00:22:32.550 12:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:32.861 12:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c23083a2-20a7-42b2-9685-86da2b5cff29 00:22:33.120 12:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:33.377 [2024-12-05 12:26:04.081880] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:33.377 12:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:22:33.636 12:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101276
00:22:33.636 12:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:22:33.636 12:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:22:34.570 12:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot c23083a2-20a7-42b2-9685-86da2b5cff29 MY_SNAPSHOT
00:22:35.135 12:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5c74f881-04c5-42ac-adf5-95fdb2676143
00:22:35.135 12:26:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize c23083a2-20a7-42b2-9685-86da2b5cff29 30
00:22:35.393 12:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5c74f881-04c5-42ac-adf5-95fdb2676143 MY_CLONE
00:22:35.651 12:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=461400bd-70bf-4454-9d7e-26a43b9fae57
00:22:35.651 12:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 461400bd-70bf-4454-9d7e-26a43b9fae57
00:22:36.216 12:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101276
00:22:44.332 Initializing NVMe Controllers
00:22:44.332 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0
00:22:44.332 Controller IO queue size 128, less than required.
00:22:44.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.332 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:22:44.332 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:22:44.332 Initialization complete. Launching workers.
00:22:44.332 ========================================================
00:22:44.332 Latency(us)
00:22:44.332 Device Information : IOPS MiB/s Average min max
00:22:44.332 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11451.70 44.73 11178.13 4593.56 92556.14
00:22:44.332 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11938.60 46.64 10726.47 2436.64 88067.66
00:22:44.332 ========================================================
00:22:44.332 Total : 23390.30 91.37 10947.60 2436.64 92556.14
00:22:44.332 
00:22:44.332 12:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:22:44.332 12:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c23083a2-20a7-42b2-9685-86da2b5cff29
00:22:44.592 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b30dcbc-1f79-402b-80ff-6af3b113f160
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:44.850 rmmod nvme_tcp
00:22:44.850 rmmod nvme_fabrics
00:22:44.850 rmmod nvme_keyring
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 101142 ']'
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 101142
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 101142 ']'
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 101142
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101142
00:22:44.850 killing 
process with pid 101142 00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101142' 00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 101142 00:22:44.850 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 101142 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:45.107 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:45.365 12:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:45.365 
12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:22:45.365 00:22:45.365 real 0m15.687s 00:22:45.365 user 0m55.501s 00:22:45.365 sys 0m5.933s 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.365 ************************************ 00:22:45.365 END TEST nvmf_lvol 00:22:45.365 ************************************ 00:22:45.365 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:22:45.624 ************************************ 00:22:45.624 START TEST nvmf_lvs_grow 00:22:45.624 ************************************ 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:22:45.624 * Looking for test storage... 
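(With the xtrace noise stripped away, the nvmf_lvol test that just finished above is a plain RPC sequence against that target. A condensed replay follows; each create call prints the UUID of the new object, captured here into illustrative shell variables, and the UUIDs noted in the comments are the ones this particular run generated.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                    # -> Malloc0
$rpc bdev_malloc_create 64 512                                    # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # 9b30dcbc-... in this run
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                   # 20 MiB volume, c23083a2-... here
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# spdk_nvme_perf hammers the export while the volume is reshaped underneath it:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)               # 5c74f881-... in this run
$rpc bdev_lvol_resize "$lvol" 30                                  # grow 20 -> 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)                    # 461400bd-... in this run
$rpc bdev_lvol_inflate "$clone"                                   # decouple the clone from its snapshot
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"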
00:22:45.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:45.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.624 --rc genhtml_branch_coverage=1 00:22:45.624 --rc genhtml_function_coverage=1 00:22:45.624 --rc genhtml_legend=1 00:22:45.624 --rc geninfo_all_blocks=1 00:22:45.624 --rc geninfo_unexecuted_blocks=1 00:22:45.624 00:22:45.624 ' 00:22:45.624 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:45.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.624 --rc genhtml_branch_coverage=1 00:22:45.624 --rc genhtml_function_coverage=1 00:22:45.624 --rc genhtml_legend=1 00:22:45.624 --rc geninfo_all_blocks=1 00:22:45.624 --rc geninfo_unexecuted_blocks=1 00:22:45.624 00:22:45.625 ' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:45.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.625 --rc genhtml_branch_coverage=1 00:22:45.625 --rc genhtml_function_coverage=1 00:22:45.625 --rc genhtml_legend=1 00:22:45.625 --rc geninfo_all_blocks=1 00:22:45.625 --rc geninfo_unexecuted_blocks=1 00:22:45.625 00:22:45.625 ' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:45.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.625 --rc genhtml_branch_coverage=1 00:22:45.625 --rc genhtml_function_coverage=1 00:22:45.625 --rc genhtml_legend=1 00:22:45.625 --rc geninfo_all_blocks=1 00:22:45.625 --rc geninfo_unexecuted_blocks=1 00:22:45.625 00:22:45.625 ' 00:22:45.625 12:26:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
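The trace above is nvmf/common.sh accumulating the target's command line in a bash array; a condensed sketch of that pattern, using only the appends the trace has shown so far (NVMF_APP_SHM_ID and NO_HUGE are supplied by the harness, and the next trace lines append --interrupt-mode the same way):

    # NVMF_APP already holds the nvmf_tgt binary path at this point
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # instance shm id + 0xFFFF tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                  # expands to nothing unless no-huge mode is set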
00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.625 12:26:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:45.625 Cannot find device "nvmf_init_br" 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:22:45.625 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:45.884 Cannot find device "nvmf_init_br2" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:45.884 Cannot find device "nvmf_tgt_br" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.884 Cannot find device "nvmf_tgt_br2" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:45.884 Cannot find device "nvmf_init_br" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:45.884 Cannot find device "nvmf_init_br2" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:45.884 Cannot find device "nvmf_tgt_br" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:45.884 Cannot find device "nvmf_tgt_br2" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:45.884 Cannot find device "nvmf_br" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:45.884 Cannot find device "nvmf_init_if" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:45.884 Cannot find device "nvmf_init_if2" 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:45.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:45.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:45.884 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:22:46.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:46.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:22:46.143 00:22:46.143 --- 10.0.0.3 ping statistics --- 00:22:46.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.143 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:46.143 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:46.143 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:22:46.143 00:22:46.143 --- 10.0.0.4 ping statistics --- 00:22:46.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.143 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:46.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:22:46.143 00:22:46.143 --- 10.0.0.1 ping statistics --- 00:22:46.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.143 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:46.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:46.143 00:22:46.143 --- 10.0.0.2 ping statistics --- 00:22:46.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.143 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=101692 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 101692 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 101692 ']' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.143 12:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:22:46.143 [2024-12-05 12:26:16.897110] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:22:46.143 [2024-12-05 12:26:16.898000] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:22:46.143 [2024-12-05 12:26:16.898044] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.400 [2024-12-05 12:26:17.036811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.400 [2024-12-05 12:26:17.083450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.400 [2024-12-05 12:26:17.083506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.400 [2024-12-05 12:26:17.083516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.400 [2024-12-05 12:26:17.083523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.400 [2024-12-05 12:26:17.083530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.400 [2024-12-05 12:26:17.083886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.401 [2024-12-05 12:26:17.200653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:22:46.401 [2024-12-05 12:26:17.200945] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
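A minimal sketch of the start-and-wait step just traced: launch nvmf_tgt inside the test namespace, record its pid, and block until the RPC socket is up. The socket poll is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation:

    sudo ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # waitforlisten returns once the app serves /var/tmp/spdk.sock;
    # polling for the unix socket is a rough equivalent.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done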
00:22:46.401 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.401 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:22:46.401 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.401 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.401 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:22:46.401 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.401 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:46.966 [2024-12-05 12:26:17.556810] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:22:46.966 ************************************ 00:22:46.966 START TEST lvs_grow_clean 00:22:46.966 ************************************ 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:46.966 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:47.224 12:26:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:22:47.224 12:26:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:22:47.480 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:22:47.481 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:22:47.481 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:22:47.738 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:22:47.738 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:22:47.738 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 lvol 150 00:22:47.995 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=259b2599-2cb1-44f6-b23c-bae91878804f 00:22:47.995 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:47.995 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:22:48.252 [2024-12-05 12:26:18.968546] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:22:48.252 [2024-12-05 12:26:18.968679] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:22:48.252 true 00:22:48.252 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:22:48.252 12:26:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:22:48.509 12:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:22:48.509 12:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:48.767 12:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 259b2599-2cb1-44f6-b23c-bae91878804f 00:22:49.024 12:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:49.282 [2024-12-05 12:26:19.896977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:49.282 12:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=101839 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 101839 /var/tmp/bdevperf.sock 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 101839 ']' 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.282 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:22:49.539 [2024-12-05 12:26:20.196626] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
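Pulled together, the target-side provisioning traced over the last few steps reduces to four rpc.py calls, with the transport options, NQN, namespace UUID, and listener address copied verbatim from the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 259b2599-2cb1-44f6-b23c-bae91878804f
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420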
00:22:49.539 [2024-12-05 12:26:20.196723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101839 ] 00:22:49.539 [2024-12-05 12:26:20.345035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.797 [2024-12-05 12:26:20.407762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.797 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.797 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:22:49.797 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:22:50.056 Nvme0n1 00:22:50.056 12:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:22:50.314 [ 00:22:50.314 { 00:22:50.314 "aliases": [ 00:22:50.314 "259b2599-2cb1-44f6-b23c-bae91878804f" 00:22:50.314 ], 00:22:50.314 "assigned_rate_limits": { 00:22:50.314 "r_mbytes_per_sec": 0, 00:22:50.314 "rw_ios_per_sec": 0, 00:22:50.314 "rw_mbytes_per_sec": 0, 00:22:50.314 "w_mbytes_per_sec": 0 00:22:50.314 }, 00:22:50.314 "block_size": 4096, 00:22:50.314 "claimed": false, 00:22:50.314 "driver_specific": { 00:22:50.314 "mp_policy": "active_passive", 00:22:50.314 "nvme": [ 00:22:50.314 { 00:22:50.314 "ctrlr_data": { 00:22:50.314 "ana_reporting": false, 00:22:50.314 "cntlid": 1, 00:22:50.314 "firmware_revision": "25.01", 00:22:50.314 "model_number": "SPDK bdev Controller", 00:22:50.314 "multi_ctrlr": true, 00:22:50.314 "oacs": { 00:22:50.314 "firmware": 0, 00:22:50.314 "format": 0, 00:22:50.314 "ns_manage": 0, 00:22:50.314 "security": 0 00:22:50.314 }, 00:22:50.314 "serial_number": "SPDK0", 00:22:50.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:50.314 "vendor_id": "0x8086" 00:22:50.314 }, 00:22:50.314 "ns_data": { 00:22:50.314 "can_share": true, 00:22:50.314 "id": 1 00:22:50.314 }, 00:22:50.314 "trid": { 00:22:50.314 "adrfam": "IPv4", 00:22:50.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:50.314 "traddr": "10.0.0.3", 00:22:50.314 "trsvcid": "4420", 00:22:50.314 "trtype": "TCP" 00:22:50.314 }, 00:22:50.314 "vs": { 00:22:50.314 "nvme_version": "1.3" 00:22:50.314 } 00:22:50.314 } 00:22:50.314 ] 00:22:50.314 }, 00:22:50.314 "memory_domains": [ 00:22:50.314 { 00:22:50.314 "dma_device_id": "system", 00:22:50.314 "dma_device_type": 1 00:22:50.314 } 00:22:50.314 ], 00:22:50.314 "name": "Nvme0n1", 00:22:50.314 "num_blocks": 38912, 00:22:50.314 "numa_id": -1, 00:22:50.314 "product_name": "NVMe disk", 00:22:50.314 "supported_io_types": { 00:22:50.314 "abort": true, 00:22:50.314 "compare": true, 00:22:50.314 "compare_and_write": true, 00:22:50.314 "copy": true, 00:22:50.314 "flush": true, 00:22:50.314 "get_zone_info": false, 00:22:50.314 "nvme_admin": true, 00:22:50.314 "nvme_io": true, 00:22:50.314 "nvme_io_md": false, 00:22:50.314 "nvme_iov_md": false, 00:22:50.314 "read": true, 00:22:50.314 "reset": true, 00:22:50.314 "seek_data": false, 00:22:50.314 
"seek_hole": false, 00:22:50.314 "unmap": true, 00:22:50.314 "write": true, 00:22:50.314 "write_zeroes": true, 00:22:50.314 "zcopy": false, 00:22:50.314 "zone_append": false, 00:22:50.314 "zone_management": false 00:22:50.314 }, 00:22:50.314 "uuid": "259b2599-2cb1-44f6-b23c-bae91878804f", 00:22:50.314 "zoned": false 00:22:50.314 } 00:22:50.314 ] 00:22:50.314 12:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:50.314 12:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=101873 00:22:50.314 12:26:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:22:50.572 Running I/O for 10 seconds... 00:22:51.506 Latency(us) 00:22:51.506 [2024-12-05T12:26:22.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:51.506 Nvme0n1 : 1.00 6174.00 24.12 0.00 0.00 0.00 0.00 0.00 00:22:51.506 [2024-12-05T12:26:22.375Z] =================================================================================================================== 00:22:51.506 [2024-12-05T12:26:22.375Z] Total : 6174.00 24.12 0.00 0.00 0.00 0.00 0.00 00:22:51.506 00:22:52.440 12:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:22:52.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:52.440 Nvme0n1 : 2.00 6547.50 25.58 0.00 0.00 0.00 0.00 0.00 00:22:52.440 [2024-12-05T12:26:23.309Z] =================================================================================================================== 00:22:52.440 [2024-12-05T12:26:23.309Z] Total : 6547.50 25.58 0.00 0.00 0.00 0.00 0.00 00:22:52.440 00:22:52.698 true 00:22:52.698 12:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:22:52.698 12:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:53.265 12:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:53.265 12:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:53.265 12:26:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 101873 00:22:53.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:53.524 Nvme0n1 : 3.00 6629.67 25.90 0.00 0.00 0.00 0.00 0.00 00:22:53.524 [2024-12-05T12:26:24.393Z] =================================================================================================================== 00:22:53.524 [2024-12-05T12:26:24.393Z] Total : 6629.67 25.90 0.00 0.00 0.00 0.00 0.00 00:22:53.524 00:22:54.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:54.456 Nvme0n1 : 4.00 6717.75 26.24 0.00 0.00 0.00 0.00 0.00 00:22:54.456 
[2024-12-05T12:26:25.325Z] =================================================================================================================== 00:22:54.456 [2024-12-05T12:26:25.325Z] Total : 6717.75 26.24 0.00 0.00 0.00 0.00 0.00 00:22:54.456 00:22:55.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:55.446 Nvme0n1 : 5.00 6744.80 26.35 0.00 0.00 0.00 0.00 0.00 00:22:55.446 [2024-12-05T12:26:26.315Z] =================================================================================================================== 00:22:55.446 [2024-12-05T12:26:26.315Z] Total : 6744.80 26.35 0.00 0.00 0.00 0.00 0.00 00:22:55.446 00:22:56.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:56.818 Nvme0n1 : 6.00 6805.67 26.58 0.00 0.00 0.00 0.00 0.00 00:22:56.818 [2024-12-05T12:26:27.687Z] =================================================================================================================== 00:22:56.818 [2024-12-05T12:26:27.687Z] Total : 6805.67 26.58 0.00 0.00 0.00 0.00 0.00 00:22:56.818 00:22:57.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:57.751 Nvme0n1 : 7.00 6840.43 26.72 0.00 0.00 0.00 0.00 0.00 00:22:57.751 [2024-12-05T12:26:28.620Z] =================================================================================================================== 00:22:57.751 [2024-12-05T12:26:28.620Z] Total : 6840.43 26.72 0.00 0.00 0.00 0.00 0.00 00:22:57.751 00:22:58.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:58.684 Nvme0n1 : 8.00 6866.25 26.82 0.00 0.00 0.00 0.00 0.00 00:22:58.684 [2024-12-05T12:26:29.553Z] =================================================================================================================== 00:22:58.684 [2024-12-05T12:26:29.553Z] Total : 6866.25 26.82 0.00 0.00 0.00 0.00 0.00 00:22:58.684 00:22:59.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:59.617 Nvme0n1 : 9.00 6876.11 26.86 0.00 0.00 0.00 0.00 0.00 00:22:59.617 [2024-12-05T12:26:30.486Z] =================================================================================================================== 00:22:59.617 [2024-12-05T12:26:30.486Z] Total : 6876.11 26.86 0.00 0.00 0.00 0.00 0.00 00:22:59.617 00:23:00.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:00.550 Nvme0n1 : 10.00 6881.90 26.88 0.00 0.00 0.00 0.00 0.00 00:23:00.550 [2024-12-05T12:26:31.419Z] =================================================================================================================== 00:23:00.550 [2024-12-05T12:26:31.419Z] Total : 6881.90 26.88 0.00 0.00 0.00 0.00 0.00 00:23:00.550 00:23:00.550 00:23:00.550 Latency(us) 00:23:00.550 [2024-12-05T12:26:31.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:00.550 Nvme0n1 : 10.01 6889.36 26.91 0.00 0.00 18574.94 8996.31 38844.97 00:23:00.550 [2024-12-05T12:26:31.419Z] =================================================================================================================== 00:23:00.550 [2024-12-05T12:26:31.419Z] Total : 6889.36 26.91 0.00 0.00 18574.94 8996.31 38844.97 00:23:00.550 { 00:23:00.550 "results": [ 00:23:00.550 { 00:23:00.550 "job": "Nvme0n1", 00:23:00.550 "core_mask": "0x2", 00:23:00.550 "workload": "randwrite", 00:23:00.550 "status": "finished", 00:23:00.550 "queue_depth": 128, 00:23:00.550 "io_size": 4096, 
00:23:00.550 "runtime": 10.007756, 00:23:00.550 "iops": 6889.356615009398, 00:23:00.550 "mibps": 26.911549277380463, 00:23:00.550 "io_failed": 0, 00:23:00.550 "io_timeout": 0, 00:23:00.550 "avg_latency_us": 18574.944927447563, 00:23:00.550 "min_latency_us": 8996.305454545454, 00:23:00.550 "max_latency_us": 38844.97454545455 00:23:00.550 } 00:23:00.550 ], 00:23:00.550 "core_count": 1 00:23:00.550 } 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 101839 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 101839 ']' 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 101839 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101839 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:00.550 killing process with pid 101839 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101839' 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 101839 00:23:00.550 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.550 00:23:00.550 Latency(us) 00:23:00.550 [2024-12-05T12:26:31.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.550 [2024-12-05T12:26:31.419Z] =================================================================================================================== 00:23:00.550 [2024-12-05T12:26:31.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.550 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 101839 00:23:00.844 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:01.101 12:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:01.358 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:23:01.358 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:23:01.616 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
00:23:01.616 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:23:01.616 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:01.874 [2024-12-05 12:26:32.572630] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:01.874 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:23:02.132 2024/12/05 12:26:32 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:b3125175-3e1c-4ab4-b0c5-1abd802e1116], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:23:02.132 request: 00:23:02.132 { 00:23:02.132 "method": "bdev_lvol_get_lvstores", 00:23:02.132 "params": { 00:23:02.132 "uuid": "b3125175-3e1c-4ab4-b0c5-1abd802e1116" 00:23:02.132 } 00:23:02.132 } 00:23:02.132 Got JSON-RPC error response 00:23:02.132 GoRPCClient: error on JSON-RPC call 00:23:02.132 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:23:02.132 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 
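The Code=-19 (No such device) failure above is deliberate: the suite's NOT wrapper asserts that bdev_lvol_get_lvstores errors out once the backing aio_bdev has been deleted. A rough shell equivalent of that assertion, with the lvstore UUID taken from the log:

    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores \
           -u b3125175-3e1c-4ab4-b0c5-1abd802e1116; then
        echo "lvstore lookup should have failed after aio_bdev removal" >&2
        exit 1
    fi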
00:23:02.132 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.132 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.132 12:26:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:02.390 aio_bdev 00:23:02.390 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 259b2599-2cb1-44f6-b23c-bae91878804f 00:23:02.390 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=259b2599-2cb1-44f6-b23c-bae91878804f 00:23:02.390 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:02.390 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:23:02.390 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:02.390 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:02.390 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:02.648 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 259b2599-2cb1-44f6-b23c-bae91878804f -t 2000 00:23:02.905 [ 00:23:02.905 { 00:23:02.905 "aliases": [ 00:23:02.905 "lvs/lvol" 00:23:02.905 ], 00:23:02.905 "assigned_rate_limits": { 00:23:02.905 "r_mbytes_per_sec": 0, 00:23:02.905 "rw_ios_per_sec": 0, 00:23:02.906 "rw_mbytes_per_sec": 0, 00:23:02.906 "w_mbytes_per_sec": 0 00:23:02.906 }, 00:23:02.906 "block_size": 4096, 00:23:02.906 "claimed": false, 00:23:02.906 "driver_specific": { 00:23:02.906 "lvol": { 00:23:02.906 "base_bdev": "aio_bdev", 00:23:02.906 "clone": false, 00:23:02.906 "esnap_clone": false, 00:23:02.906 "lvol_store_uuid": "b3125175-3e1c-4ab4-b0c5-1abd802e1116", 00:23:02.906 "num_allocated_clusters": 38, 00:23:02.906 "snapshot": false, 00:23:02.906 "thin_provision": false 00:23:02.906 } 00:23:02.906 }, 00:23:02.906 "name": "259b2599-2cb1-44f6-b23c-bae91878804f", 00:23:02.906 "num_blocks": 38912, 00:23:02.906 "product_name": "Logical Volume", 00:23:02.906 "supported_io_types": { 00:23:02.906 "abort": false, 00:23:02.906 "compare": false, 00:23:02.906 "compare_and_write": false, 00:23:02.906 "copy": false, 00:23:02.906 "flush": false, 00:23:02.906 "get_zone_info": false, 00:23:02.906 "nvme_admin": false, 00:23:02.906 "nvme_io": false, 00:23:02.906 "nvme_io_md": false, 00:23:02.906 "nvme_iov_md": false, 00:23:02.906 "read": true, 00:23:02.906 "reset": true, 00:23:02.906 "seek_data": true, 00:23:02.906 "seek_hole": true, 00:23:02.906 "unmap": true, 00:23:02.906 "write": true, 00:23:02.906 "write_zeroes": true, 00:23:02.906 "zcopy": false, 00:23:02.906 "zone_append": false, 00:23:02.906 "zone_management": false 00:23:02.906 }, 00:23:02.906 "uuid": 
"259b2599-2cb1-44f6-b23c-bae91878804f", 00:23:02.906 "zoned": false 00:23:02.906 } 00:23:02.906 ] 00:23:02.906 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:23:02.906 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:23:02.906 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:23:03.164 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:23:03.164 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:23:03.164 12:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:23:03.422 12:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:23:03.422 12:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 259b2599-2cb1-44f6-b23c-bae91878804f 00:23:03.681 12:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3125175-3e1c-4ab4-b0c5-1abd802e1116 00:23:03.939 12:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:04.197 12:26:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:04.455 ************************************ 00:23:04.455 END TEST lvs_grow_clean 00:23:04.455 ************************************ 00:23:04.455 00:23:04.455 real 0m17.667s 00:23:04.455 user 0m16.870s 00:23:04.455 sys 0m2.200s 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:04.455 ************************************ 00:23:04.455 START TEST lvs_grow_dirty 00:23:04.455 ************************************ 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:23:04.455 12:26:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:04.455 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:04.713 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:04.713 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:04.971 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:04.971 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:04.971 12:26:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:05.229 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:05.229 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:05.229 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 lvol 150 00:23:05.796 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 00:23:05.796 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:05.796 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:05.796 [2024-12-05 12:26:36.552524] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:05.796 [2024-12-05 12:26:36.552694] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:05.796 true 00:23:05.796 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:05.796 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:06.054 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:06.054 12:26:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:06.312 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 00:23:06.569 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:06.827 [2024-12-05 12:26:37.500940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:06.827 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102255 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102255 /var/tmp/bdevperf.sock 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102255 ']' 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
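The dirty variant drives I/O through bdevperf on a private RPC socket. A hedged sketch of the launch-and-wait step logged above, with the flags copied from this run; the rpc_get_methods polling loop is an assumption standing in for the waitforlisten helper:

    # Start bdevperf idle (-z) on core 1 and wait for its RPC socket.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    SOCK=/var/tmp/bdevperf.sock
    "$BDEVPERF" -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!
    # Poll until the UNIX domain socket answers RPCs (simplified waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done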
00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.085 12:26:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:07.085 [2024-12-05 12:26:37.841065] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:07.085 [2024-12-05 12:26:37.841170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102255 ] 00:23:07.344 [2024-12-05 12:26:37.988559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.344 [2024-12-05 12:26:38.049411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.909 12:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.909 12:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:23:07.909 12:26:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:08.167 Nvme0n1 00:23:08.167 12:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:08.425 [ 00:23:08.425 { 00:23:08.425 "aliases": [ 00:23:08.425 "2ae612cb-cd9b-48cc-94e7-ffc4eb7db307" 00:23:08.425 ], 00:23:08.425 "assigned_rate_limits": { 00:23:08.425 "r_mbytes_per_sec": 0, 00:23:08.425 "rw_ios_per_sec": 0, 00:23:08.425 "rw_mbytes_per_sec": 0, 00:23:08.425 "w_mbytes_per_sec": 0 00:23:08.425 }, 00:23:08.425 "block_size": 4096, 00:23:08.425 "claimed": false, 00:23:08.425 "driver_specific": { 00:23:08.425 "mp_policy": "active_passive", 00:23:08.425 "nvme": [ 00:23:08.425 { 00:23:08.425 "ctrlr_data": { 00:23:08.425 "ana_reporting": false, 00:23:08.425 "cntlid": 1, 00:23:08.425 "firmware_revision": "25.01", 00:23:08.425 "model_number": "SPDK bdev Controller", 00:23:08.425 "multi_ctrlr": true, 00:23:08.425 "oacs": { 00:23:08.425 "firmware": 0, 00:23:08.425 "format": 0, 00:23:08.425 "ns_manage": 0, 00:23:08.425 "security": 0 00:23:08.425 }, 00:23:08.425 "serial_number": "SPDK0", 00:23:08.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:08.425 "vendor_id": "0x8086" 00:23:08.425 }, 00:23:08.425 "ns_data": { 00:23:08.425 "can_share": true, 00:23:08.425 "id": 1 00:23:08.425 }, 00:23:08.425 "trid": { 00:23:08.425 "adrfam": "IPv4", 00:23:08.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:08.425 "traddr": "10.0.0.3", 00:23:08.425 "trsvcid": "4420", 00:23:08.425 "trtype": "TCP" 00:23:08.425 }, 00:23:08.425 "vs": { 00:23:08.425 "nvme_version": "1.3" 00:23:08.425 } 00:23:08.425 } 00:23:08.425 ] 00:23:08.425 }, 00:23:08.425 "memory_domains": [ 00:23:08.425 { 00:23:08.425 "dma_device_id": "system", 00:23:08.425 "dma_device_type": 1 
00:23:08.425 } 00:23:08.425 ], 00:23:08.425 "name": "Nvme0n1", 00:23:08.425 "num_blocks": 38912, 00:23:08.425 "numa_id": -1, 00:23:08.425 "product_name": "NVMe disk", 00:23:08.425 "supported_io_types": { 00:23:08.425 "abort": true, 00:23:08.425 "compare": true, 00:23:08.425 "compare_and_write": true, 00:23:08.425 "copy": true, 00:23:08.425 "flush": true, 00:23:08.425 "get_zone_info": false, 00:23:08.425 "nvme_admin": true, 00:23:08.425 "nvme_io": true, 00:23:08.425 "nvme_io_md": false, 00:23:08.425 "nvme_iov_md": false, 00:23:08.425 "read": true, 00:23:08.425 "reset": true, 00:23:08.425 "seek_data": false, 00:23:08.425 "seek_hole": false, 00:23:08.425 "unmap": true, 00:23:08.425 "write": true, 00:23:08.425 "write_zeroes": true, 00:23:08.425 "zcopy": false, 00:23:08.425 "zone_append": false, 00:23:08.425 "zone_management": false 00:23:08.425 }, 00:23:08.425 "uuid": "2ae612cb-cd9b-48cc-94e7-ffc4eb7db307", 00:23:08.425 "zoned": false 00:23:08.425 } 00:23:08.425 ] 00:23:08.425 12:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.425 12:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102303 00:23:08.425 12:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:08.682 Running I/O for 10 seconds... 00:23:09.615 Latency(us) 00:23:09.615 [2024-12-05T12:26:40.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:09.615 Nvme0n1 : 1.00 6902.00 26.96 0.00 0.00 0.00 0.00 0.00 00:23:09.615 [2024-12-05T12:26:40.484Z] =================================================================================================================== 00:23:09.615 [2024-12-05T12:26:40.484Z] Total : 6902.00 26.96 0.00 0.00 0.00 0.00 0.00 00:23:09.615 00:23:10.549 12:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:10.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:10.549 Nvme0n1 : 2.00 7199.00 28.12 0.00 0.00 0.00 0.00 0.00 00:23:10.549 [2024-12-05T12:26:41.418Z] =================================================================================================================== 00:23:10.549 [2024-12-05T12:26:41.418Z] Total : 7199.00 28.12 0.00 0.00 0.00 0.00 0.00 00:23:10.549 00:23:10.807 true 00:23:10.807 12:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:10.807 12:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:11.373 12:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:11.373 12:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:11.373 12:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 102303 00:23:11.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:11.630 Nvme0n1 : 3.00 7280.00 28.44 0.00 0.00 0.00 0.00 0.00 00:23:11.630 [2024-12-05T12:26:42.499Z] =================================================================================================================== 00:23:11.630 [2024-12-05T12:26:42.499Z] Total : 7280.00 28.44 0.00 0.00 0.00 0.00 0.00 00:23:11.630 00:23:12.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:12.563 Nvme0n1 : 4.00 7334.00 28.65 0.00 0.00 0.00 0.00 0.00 00:23:12.563 [2024-12-05T12:26:43.432Z] =================================================================================================================== 00:23:12.563 [2024-12-05T12:26:43.432Z] Total : 7334.00 28.65 0.00 0.00 0.00 0.00 0.00 00:23:12.563 00:23:13.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:13.496 Nvme0n1 : 5.00 7323.60 28.61 0.00 0.00 0.00 0.00 0.00 00:23:13.496 [2024-12-05T12:26:44.365Z] =================================================================================================================== 00:23:13.496 [2024-12-05T12:26:44.365Z] Total : 7323.60 28.61 0.00 0.00 0.00 0.00 0.00 00:23:13.496 00:23:14.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:14.864 Nvme0n1 : 6.00 7324.17 28.61 0.00 0.00 0.00 0.00 0.00 00:23:14.864 [2024-12-05T12:26:45.733Z] =================================================================================================================== 00:23:14.865 [2024-12-05T12:26:45.734Z] Total : 7324.17 28.61 0.00 0.00 0.00 0.00 0.00 00:23:14.865 00:23:15.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:15.797 Nvme0n1 : 7.00 7319.43 28.59 0.00 0.00 0.00 0.00 0.00 00:23:15.797 [2024-12-05T12:26:46.666Z] =================================================================================================================== 00:23:15.797 [2024-12-05T12:26:46.666Z] Total : 7319.43 28.59 0.00 0.00 0.00 0.00 0.00 00:23:15.797 00:23:16.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:16.731 Nvme0n1 : 8.00 7141.12 27.90 0.00 0.00 0.00 0.00 0.00 00:23:16.731 [2024-12-05T12:26:47.600Z] =================================================================================================================== 00:23:16.731 [2024-12-05T12:26:47.600Z] Total : 7141.12 27.90 0.00 0.00 0.00 0.00 0.00 00:23:16.731 00:23:17.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:17.667 Nvme0n1 : 9.00 7114.22 27.79 0.00 0.00 0.00 0.00 0.00 00:23:17.667 [2024-12-05T12:26:48.536Z] =================================================================================================================== 00:23:17.667 [2024-12-05T12:26:48.536Z] Total : 7114.22 27.79 0.00 0.00 0.00 0.00 0.00 00:23:17.667 00:23:18.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:18.602 Nvme0n1 : 10.00 7094.70 27.71 0.00 0.00 0.00 0.00 0.00 00:23:18.602 [2024-12-05T12:26:49.471Z] =================================================================================================================== 00:23:18.602 [2024-12-05T12:26:49.471Z] Total : 7094.70 27.71 0.00 0.00 0.00 0.00 0.00 00:23:18.602 00:23:18.602 00:23:18.602 Latency(us) 00:23:18.602 [2024-12-05T12:26:49.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.602 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:23:18.602 Nvme0n1 : 10.01 7101.85 27.74 0.00 0.00 18018.17 8579.26 183977.43 00:23:18.602 [2024-12-05T12:26:49.471Z] =================================================================================================================== 00:23:18.602 [2024-12-05T12:26:49.471Z] Total : 7101.85 27.74 0.00 0.00 18018.17 8579.26 183977.43 00:23:18.602 { 00:23:18.602 "results": [ 00:23:18.602 { 00:23:18.602 "job": "Nvme0n1", 00:23:18.602 "core_mask": "0x2", 00:23:18.602 "workload": "randwrite", 00:23:18.602 "status": "finished", 00:23:18.602 "queue_depth": 128, 00:23:18.602 "io_size": 4096, 00:23:18.602 "runtime": 10.007951, 00:23:18.602 "iops": 7101.853316428108, 00:23:18.602 "mibps": 27.741614517297297, 00:23:18.602 "io_failed": 0, 00:23:18.602 "io_timeout": 0, 00:23:18.602 "avg_latency_us": 18018.17273625172, 00:23:18.602 "min_latency_us": 8579.258181818182, 00:23:18.602 "max_latency_us": 183977.42545454545 00:23:18.602 } 00:23:18.602 ], 00:23:18.602 "core_count": 1 00:23:18.602 } 00:23:18.602 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102255 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 102255 ']' 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 102255 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102255 00:23:18.603 killing process with pid 102255 00:23:18.603 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.603 00:23:18.603 Latency(us) 00:23:18.603 [2024-12-05T12:26:49.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.603 [2024-12-05T12:26:49.472Z] =================================================================================================================== 00:23:18.603 [2024-12-05T12:26:49.472Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102255' 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 102255 00:23:18.603 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 102255 00:23:18.861 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:19.119 12:26:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:19.376 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:19.376 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 101692 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 101692 00:23:19.634 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 101692 Killed "${NVMF_APP[@]}" "$@" 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=102455 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 102455 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102455 ']' 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
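After the old app is killed with SIGKILL, a fresh nvmf_tgt comes up in interrupt mode inside the test's network namespace. A minimal sketch of that restart using the flags from this run; the namespace name and paths are specific to this environment, and the rpc_get_methods wait loop again stands in for waitforlisten:

    # Relaunch the target in interrupt mode on core 0 inside the test netns.
    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # Wait on the default socket /var/tmp/spdk.sock before issuing RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done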
00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.634 12:26:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:19.892 [2024-12-05 12:26:50.511861] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:19.892 [2024-12-05 12:26:50.513199] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:19.892 [2024-12-05 12:26:50.513312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.892 [2024-12-05 12:26:50.662437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.892 [2024-12-05 12:26:50.706413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.892 [2024-12-05 12:26:50.706460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.892 [2024-12-05 12:26:50.706469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.892 [2024-12-05 12:26:50.706476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.892 [2024-12-05 12:26:50.706482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.892 [2024-12-05 12:26:50.706788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.151 [2024-12-05 12:26:50.791691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:20.151 [2024-12-05 12:26:50.791940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
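The startup notices above spell out how to snapshot tracepoints from the running instance. As the log itself suggests (the shm id matches the -i 0 flag used here; the /tmp destination below is illustrative):

    # Capture a snapshot of the nvmf tracepoints from app instance 0 ...
    spdk_trace -s nvmf -i 0
    # ... or keep the raw trace shm file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0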
00:23:20.718 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.718 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:23:20.718 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.718 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.718 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:20.718 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.718 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:20.977 [2024-12-05 12:26:51.788472] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:20.977 [2024-12-05 12:26:51.789096] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:20.977 [2024-12-05 12:26:51.789538] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:20.977 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:23:20.977 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 00:23:20.977 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 00:23:20.977 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:20.977 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:23:20.977 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:20.977 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:20.977 12:26:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:21.545 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 -t 2000 00:23:21.545 [ 00:23:21.545 { 00:23:21.545 "aliases": [ 00:23:21.545 "lvs/lvol" 00:23:21.545 ], 00:23:21.545 "assigned_rate_limits": { 00:23:21.545 "r_mbytes_per_sec": 0, 00:23:21.545 "rw_ios_per_sec": 0, 00:23:21.545 "rw_mbytes_per_sec": 0, 00:23:21.545 "w_mbytes_per_sec": 0 00:23:21.545 }, 00:23:21.545 "block_size": 4096, 00:23:21.545 "claimed": false, 00:23:21.545 "driver_specific": { 00:23:21.545 "lvol": { 00:23:21.545 "base_bdev": "aio_bdev", 00:23:21.545 "clone": false, 00:23:21.545 "esnap_clone": false, 00:23:21.545 
"lvol_store_uuid": "64c2c986-b20f-4d43-ad17-38a2888dc4b4", 00:23:21.545 "num_allocated_clusters": 38, 00:23:21.545 "snapshot": false, 00:23:21.545 "thin_provision": false 00:23:21.545 } 00:23:21.545 }, 00:23:21.545 "name": "2ae612cb-cd9b-48cc-94e7-ffc4eb7db307", 00:23:21.545 "num_blocks": 38912, 00:23:21.545 "product_name": "Logical Volume", 00:23:21.545 "supported_io_types": { 00:23:21.545 "abort": false, 00:23:21.545 "compare": false, 00:23:21.545 "compare_and_write": false, 00:23:21.545 "copy": false, 00:23:21.545 "flush": false, 00:23:21.545 "get_zone_info": false, 00:23:21.545 "nvme_admin": false, 00:23:21.545 "nvme_io": false, 00:23:21.545 "nvme_io_md": false, 00:23:21.545 "nvme_iov_md": false, 00:23:21.545 "read": true, 00:23:21.545 "reset": true, 00:23:21.545 "seek_data": true, 00:23:21.545 "seek_hole": true, 00:23:21.545 "unmap": true, 00:23:21.545 "write": true, 00:23:21.545 "write_zeroes": true, 00:23:21.545 "zcopy": false, 00:23:21.545 "zone_append": false, 00:23:21.545 "zone_management": false 00:23:21.545 }, 00:23:21.545 "uuid": "2ae612cb-cd9b-48cc-94e7-ffc4eb7db307", 00:23:21.545 "zoned": false 00:23:21.545 } 00:23:21.545 ] 00:23:21.545 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:23:21.545 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:21.545 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:23:21.804 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:23:21.804 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:21.804 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:23:22.372 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:23:22.372 12:26:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:22.372 [2024-12-05 12:26:53.155498] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.372 
12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:22.372 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:22.631 2024/12/05 12:26:53 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:64c2c986-b20f-4d43-ad17-38a2888dc4b4], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:23:22.631 request: 00:23:22.631 { 00:23:22.631 "method": "bdev_lvol_get_lvstores", 00:23:22.631 "params": { 00:23:22.631 "uuid": "64c2c986-b20f-4d43-ad17-38a2888dc4b4" 00:23:22.631 } 00:23:22.631 } 00:23:22.631 Got JSON-RPC error response 00:23:22.631 GoRPCClient: error on JSON-RPC call 00:23:22.631 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:23:22.631 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.631 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:22.631 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.631 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:22.889 aio_bdev 00:23:22.889 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 00:23:22.890 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 00:23:22.890 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:22.890 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:23:22.890 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:22.890 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:22.890 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:23.148 12:26:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 -t 2000 00:23:23.415 [ 00:23:23.415 { 00:23:23.415 "aliases": [ 00:23:23.415 "lvs/lvol" 00:23:23.415 ], 00:23:23.416 "assigned_rate_limits": { 00:23:23.416 "r_mbytes_per_sec": 0, 00:23:23.416 "rw_ios_per_sec": 0, 00:23:23.416 "rw_mbytes_per_sec": 0, 00:23:23.416 "w_mbytes_per_sec": 0 00:23:23.416 }, 00:23:23.416 "block_size": 4096, 00:23:23.416 "claimed": false, 00:23:23.416 "driver_specific": { 00:23:23.416 "lvol": { 00:23:23.416 "base_bdev": "aio_bdev", 00:23:23.416 "clone": false, 00:23:23.416 "esnap_clone": false, 00:23:23.416 "lvol_store_uuid": "64c2c986-b20f-4d43-ad17-38a2888dc4b4", 00:23:23.416 "num_allocated_clusters": 38, 00:23:23.416 "snapshot": false, 00:23:23.416 "thin_provision": false 00:23:23.416 } 00:23:23.416 }, 00:23:23.416 "name": "2ae612cb-cd9b-48cc-94e7-ffc4eb7db307", 00:23:23.416 "num_blocks": 38912, 00:23:23.416 "product_name": "Logical Volume", 00:23:23.416 "supported_io_types": { 00:23:23.416 "abort": false, 00:23:23.416 "compare": false, 00:23:23.416 "compare_and_write": false, 00:23:23.416 "copy": false, 00:23:23.416 "flush": false, 00:23:23.416 "get_zone_info": false, 00:23:23.416 "nvme_admin": false, 00:23:23.416 "nvme_io": false, 00:23:23.416 "nvme_io_md": false, 00:23:23.416 "nvme_iov_md": false, 00:23:23.416 "read": true, 00:23:23.416 "reset": true, 00:23:23.416 "seek_data": true, 00:23:23.416 "seek_hole": true, 00:23:23.416 "unmap": true, 00:23:23.416 "write": true, 00:23:23.416 "write_zeroes": true, 00:23:23.416 "zcopy": false, 00:23:23.416 "zone_append": false, 00:23:23.416 "zone_management": false 00:23:23.416 }, 00:23:23.416 "uuid": "2ae612cb-cd9b-48cc-94e7-ffc4eb7db307", 00:23:23.416 "zoned": false 00:23:23.416 } 00:23:23.416 ] 00:23:23.416 12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:23:23.416 12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:23:23.416 12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:23.748 12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:23:23.748 12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:23.748 12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:23:24.042 12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:23:24.042 
12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2ae612cb-cd9b-48cc-94e7-ffc4eb7db307 00:23:24.042 12:26:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64c2c986-b20f-4d43-ad17-38a2888dc4b4 00:23:24.300 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:24.558 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:25.125 00:23:25.125 real 0m20.398s 00:23:25.125 user 0m26.298s 00:23:25.125 sys 0m9.394s 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.125 ************************************ 00:23:25.125 END TEST lvs_grow_dirty 00:23:25.125 ************************************ 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:25.125 nvmf_trace.0 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:25.125 12:26:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:23:25.693 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:25.694 12:26:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:25.694 rmmod nvme_tcp 00:23:25.694 rmmod nvme_fabrics 00:23:25.694 rmmod nvme_keyring 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 102455 ']' 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 102455 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 102455 ']' 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 102455 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102455 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:25.694 killing process with pid 102455 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102455' 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 102455 00:23:25.694 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 102455 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:25.953 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:23:26.212 00:23:26.212 real 0m40.652s 00:23:26.212 user 0m44.347s 00:23:26.212 sys 0m12.848s 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.212 ************************************ 00:23:26.212 END TEST nvmf_lvs_grow 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:23:26.212 ************************************ 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:26.212 ************************************ 00:23:26.212 START TEST nvmf_bdev_io_wait 00:23:26.212 ************************************ 00:23:26.212 12:26:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:23:26.212 * Looking for test storage... 00:23:26.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:26.212 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:26.212 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:23:26.212 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:26.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.472 --rc genhtml_branch_coverage=1 00:23:26.472 --rc genhtml_function_coverage=1 00:23:26.472 --rc genhtml_legend=1 00:23:26.472 --rc geninfo_all_blocks=1 00:23:26.472 --rc geninfo_unexecuted_blocks=1 00:23:26.472 00:23:26.472 ' 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:26.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.472 --rc genhtml_branch_coverage=1 00:23:26.472 --rc genhtml_function_coverage=1 00:23:26.472 --rc genhtml_legend=1 00:23:26.472 --rc geninfo_all_blocks=1 00:23:26.472 --rc geninfo_unexecuted_blocks=1 00:23:26.472 00:23:26.472 ' 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:26.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.472 --rc genhtml_branch_coverage=1 00:23:26.472 --rc genhtml_function_coverage=1 00:23:26.472 --rc genhtml_legend=1 00:23:26.472 --rc geninfo_all_blocks=1 00:23:26.472 --rc geninfo_unexecuted_blocks=1 00:23:26.472 00:23:26.472 ' 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:26.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.472 --rc genhtml_branch_coverage=1 00:23:26.472 --rc genhtml_function_coverage=1 00:23:26.472 --rc genhtml_legend=1 00:23:26.472 --rc geninfo_all_blocks=1 00:23:26.472 --rc 
geninfo_unexecuted_blocks=1 00:23:26.472 00:23:26.472 ' 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.472 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:26.473 Cannot find device "nvmf_init_br" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:26.473 Cannot find device "nvmf_init_br2" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:26.473 Cannot find device "nvmf_tgt_br" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:26.473 Cannot find device "nvmf_tgt_br2" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:26.473 Cannot find device "nvmf_init_br" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:26.473 Cannot find device "nvmf_init_br2" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:23:26.473 Cannot find device "nvmf_tgt_br" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:26.473 Cannot find device "nvmf_tgt_br2" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:26.473 Cannot find device "nvmf_br" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:26.473 Cannot find device "nvmf_init_if" 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:23:26.473 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:26.732 Cannot find device "nvmf_init_if2" 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:26.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:26.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:26.732 12:26:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:26.732 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:26.990 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:26.990 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:26.990 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:26.990 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:26.990 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:26.990 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:26.990 
12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:26.990 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:26.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:26.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:23:26.990 00:23:26.990 --- 10.0.0.3 ping statistics --- 00:23:26.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.990 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:26.990 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:26.990 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:26.991 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:23:26.991 00:23:26.991 --- 10.0.0.4 ping statistics --- 00:23:26.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.991 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:26.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:26.991 00:23:26.991 --- 10.0.0.1 ping statistics --- 00:23:26.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.991 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:26.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:26.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:23:26.991 00:23:26.991 --- 10.0.0.2 ping statistics --- 00:23:26.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.991 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=102933 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 102933 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 102933 ']' 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
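[Note] A minimal sketch of the virtual topology that the nvmf_veth_init step above builds, condensed from the ip commands in the trace. The interface and namespace names are the ones the harness assigns; the "Cannot find device" / "Cannot open network namespace" errors earlier are the expected output of the pre-setup cleanup pass, and the second interface pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4) plus the individual link-up steps are omitted here for brevity:

    # Target side lives in its own network namespace; initiator side stays in the root ns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two legs together
    ip link set nvmf_tgt_br master nvmf_br
    # The SPDK_NVMF-tagged iptables ACCEPT rules open TCP/4420 and bridge forwarding;
    # the four pings above then verify reachability in both directions before the test runs.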
00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.991 12:26:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:26.991 [2024-12-05 12:26:57.753295] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:26.991 [2024-12-05 12:26:57.754682] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:26.991 [2024-12-05 12:26:57.754757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.249 [2024-12-05 12:26:57.910599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.249 [2024-12-05 12:26:57.968160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.249 [2024-12-05 12:26:57.968227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.249 [2024-12-05 12:26:57.968242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.249 [2024-12-05 12:26:57.968253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.249 [2024-12-05 12:26:57.968262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.249 [2024-12-05 12:26:57.969675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.249 [2024-12-05 12:26:57.969836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.249 [2024-12-05 12:26:57.969928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.249 [2024-12-05 12:26:57.969920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.249 [2024-12-05 12:26:57.970545] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
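[Note] For reference, the target launch line that produced the notices above, restated from the trace with each flag annotated:

    # -i 0              shared-memory ID distinguishing this app instance
    # -e 0xFFFF         tracepoint group mask (the "Tracepoint Group Mask 0xFFFF" notice)
    # --interrupt-mode  reactors block on file descriptors instead of busy-polling
    #                   (the thread.c "Set SPDK running in interrupt mode" notice)
    # -m 0xF            core mask 0b1111 -> four reactors, matching the
    #                   "Reactor started on core 0..3" notices
    # --wait-for-rpc    hold subsystem init until an explicit framework_start_init RPC
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc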
00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.249 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:27.509 [2024-12-05 12:26:58.162404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:27.509 [2024-12-05 12:26:58.162727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:27.509 [2024-12-05 12:26:58.163980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:27.509 [2024-12-05 12:26:58.164469] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
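[Note] The two rpc_cmd calls above are the point of the deferred start: with --wait-for-rpc the bdev layer is still unconfigured, so the test can shrink the global bdev_io pool to 5 entries with a per-thread cache of 1 before init completes. A pool that small is what lets bdevperf's queue depth of 128 exhaust it and exercise the io-wait retry path this test is named for. rpc_cmd forwards its arguments to the RPC client, so, assuming the stock scripts/rpc.py entry point, the equivalent direct calls would look roughly like:

    scripts/rpc.py bdev_set_options -p 5 -c 1   # -p bdev_io pool size, -c per-thread cache size
    scripts/rpc.py framework_start_init         # resume the init held back by --wait-for-rpc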
00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:27.509 [2024-12-05 12:26:58.174798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:27.509 Malloc0 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:27.509 [2024-12-05 12:26:58.247015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=102968 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=102970 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.509 { 00:23:27.509 "params": { 00:23:27.509 "name": "Nvme$subsystem", 00:23:27.509 "trtype": "$TEST_TRANSPORT", 00:23:27.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.509 "adrfam": "ipv4", 00:23:27.509 "trsvcid": "$NVMF_PORT", 00:23:27.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.509 "hdgst": ${hdgst:-false}, 00:23:27.509 "ddgst": ${ddgst:-false} 00:23:27.509 }, 00:23:27.509 "method": "bdev_nvme_attach_controller" 00:23:27.509 } 00:23:27.509 EOF 00:23:27.509 )") 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=102972 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=102975 00:23:27.509 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.509 { 00:23:27.509 "params": { 00:23:27.509 "name": "Nvme$subsystem", 00:23:27.509 "trtype": "$TEST_TRANSPORT", 00:23:27.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.509 "adrfam": "ipv4", 00:23:27.509 "trsvcid": "$NVMF_PORT", 00:23:27.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.509 "hdgst": ${hdgst:-false}, 00:23:27.509 "ddgst": ${ddgst:-false} 00:23:27.509 }, 00:23:27.510 "method": "bdev_nvme_attach_controller" 00:23:27.510 } 00:23:27.510 EOF 00:23:27.510 )") 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:23:27.510 12:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.510 { 00:23:27.510 "params": { 00:23:27.510 "name": "Nvme$subsystem", 00:23:27.510 "trtype": "$TEST_TRANSPORT", 00:23:27.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.510 "adrfam": "ipv4", 00:23:27.510 "trsvcid": "$NVMF_PORT", 00:23:27.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.510 "hdgst": ${hdgst:-false}, 00:23:27.510 "ddgst": ${ddgst:-false} 00:23:27.510 }, 00:23:27.510 "method": "bdev_nvme_attach_controller" 00:23:27.510 } 00:23:27.510 EOF 00:23:27.510 )") 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
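[Note] Condensing the provisioning RPCs issued above: the target ends up with one RAM-backed 64 MiB bdev exported through a single TCP subsystem on the namespaced address. The commands below restate the trace, with rpc_cmd again standing in for the suite's RPC wrapper:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u 8192 io unit size,
                                                      # -o as set by NVMF_TRANSPORT_OPTS
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420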
00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.510 { 00:23:27.510 "params": { 00:23:27.510 "name": "Nvme$subsystem", 00:23:27.510 "trtype": "$TEST_TRANSPORT", 00:23:27.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.510 "adrfam": "ipv4", 00:23:27.510 "trsvcid": "$NVMF_PORT", 00:23:27.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.510 "hdgst": ${hdgst:-false}, 00:23:27.510 "ddgst": ${ddgst:-false} 00:23:27.510 }, 00:23:27.510 "method": "bdev_nvme_attach_controller" 00:23:27.510 } 00:23:27.510 EOF 00:23:27.510 )") 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:27.510 "params": { 00:23:27.510 "name": "Nvme1", 00:23:27.510 "trtype": "tcp", 00:23:27.510 "traddr": "10.0.0.3", 00:23:27.510 "adrfam": "ipv4", 00:23:27.510 "trsvcid": "4420", 00:23:27.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.510 "hdgst": false, 00:23:27.510 "ddgst": false 00:23:27.510 }, 00:23:27.510 "method": "bdev_nvme_attach_controller" 00:23:27.510 }' 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:27.510 "params": { 00:23:27.510 "name": "Nvme1", 00:23:27.510 "trtype": "tcp", 00:23:27.510 "traddr": "10.0.0.3", 00:23:27.510 "adrfam": "ipv4", 00:23:27.510 "trsvcid": "4420", 00:23:27.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.510 "hdgst": false, 00:23:27.510 "ddgst": false 00:23:27.510 }, 00:23:27.510 "method": "bdev_nvme_attach_controller" 00:23:27.510 }' 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
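[Note] Each of the four bdevperf instances receives the JSON printed above on /dev/fd/63; gen_nvmf_target_json emits a single bdev_nvme_attach_controller entry pointing at the listener just created, so every instance sees the same Nvme1n1 bdev over TCP. A condensed sketch of how one instance (the write workload) is wired up, restating the trace; the read, flush, and unmap instances differ only in -w, -m, and -i:

    # -m 0x10: pin to core 4 (the instances use masks 0x10/0x20/0x40/0x80, one core each)
    # -q 128:  queue depth far above the 5-entry bdev_io pool, forcing allocations to wait
    # -o 4096 -w write -t 1: 4 KiB writes for one second; -s 256 gives the app 256 MB of memory
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 \
        -w write -t 1 -s 256 --json <(gen_nvmf_target_json)   # config arrives as /dev/fd/63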
00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:27.510 "params": { 00:23:27.510 "name": "Nvme1", 00:23:27.510 "trtype": "tcp", 00:23:27.510 "traddr": "10.0.0.3", 00:23:27.510 "adrfam": "ipv4", 00:23:27.510 "trsvcid": "4420", 00:23:27.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.510 "hdgst": false, 00:23:27.510 "ddgst": false 00:23:27.510 }, 00:23:27.510 "method": "bdev_nvme_attach_controller" 00:23:27.510 }' 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:27.510 "params": { 00:23:27.510 "name": "Nvme1", 00:23:27.510 "trtype": "tcp", 00:23:27.510 "traddr": "10.0.0.3", 00:23:27.510 "adrfam": "ipv4", 00:23:27.510 "trsvcid": "4420", 00:23:27.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.510 "hdgst": false, 00:23:27.510 "ddgst": false 00:23:27.510 }, 00:23:27.510 "method": "bdev_nvme_attach_controller" 00:23:27.510 }' 00:23:27.510 [2024-12-05 12:26:58.307378] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:27.510 [2024-12-05 12:26:58.307445] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:23:27.510 [2024-12-05 12:26:58.314733] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:27.510 [2024-12-05 12:26:58.314823] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:27.510 [2024-12-05 12:26:58.322011] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:27.510 [2024-12-05 12:26:58.322095] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:23:27.510 12:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 102968 00:23:27.510 [2024-12-05 12:26:58.351363] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:23:27.510 [2024-12-05 12:26:58.351799] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:23:27.772 [2024-12-05 12:26:58.544721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.772 [2024-12-05 12:26:58.602910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:27.772 [2024-12-05 12:26:58.619904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.031 [2024-12-05 12:26:58.679572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.031 [2024-12-05 12:26:58.709570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.031 [2024-12-05 12:26:58.766613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:28.031 [2024-12-05 12:26:58.791454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.031 Running I/O for 1 seconds... 00:23:28.031 Running I/O for 1 seconds... 00:23:28.031 [2024-12-05 12:26:58.849997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:23:28.290 Running I/O for 1 seconds... 00:23:28.290 Running I/O for 1 seconds... 00:23:29.225 8930.00 IOPS, 34.88 MiB/s 00:23:29.225 Latency(us) 00:23:29.225 [2024-12-05T12:27:00.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.225 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:23:29.225 Nvme1n1 : 1.01 8977.78 35.07 0.00 0.00 14188.95 3961.95 18707.55 00:23:29.225 [2024-12-05T12:27:00.094Z] =================================================================================================================== 00:23:29.225 [2024-12-05T12:27:00.094Z] Total : 8977.78 35.07 0.00 0.00 14188.95 3961.95 18707.55 00:23:29.225 6184.00 IOPS, 24.16 MiB/s 00:23:29.225 Latency(us) 00:23:29.225 [2024-12-05T12:27:00.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.225 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:23:29.225 Nvme1n1 : 1.01 6263.94 24.47 0.00 0.00 20332.69 7983.48 35508.60 00:23:29.225 [2024-12-05T12:27:00.094Z] =================================================================================================================== 00:23:29.225 [2024-12-05T12:27:00.094Z] Total : 6263.94 24.47 0.00 0.00 20332.69 7983.48 35508.60 00:23:29.225 220616.00 IOPS, 861.78 MiB/s 00:23:29.225 Latency(us) 00:23:29.225 [2024-12-05T12:27:00.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.225 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:23:29.225 Nvme1n1 : 1.00 220247.67 860.34 0.00 0.00 578.00 266.24 1660.74 00:23:29.225 [2024-12-05T12:27:00.094Z] =================================================================================================================== 00:23:29.225 [2024-12-05T12:27:00.094Z] Total : 220247.67 860.34 0.00 0.00 578.00 266.24 1660.74 00:23:29.225 12:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 102970 00:23:29.225 12:26:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 102972 00:23:29.225 6596.00 IOPS, 25.77 MiB/s 00:23:29.225 Latency(us) 00:23:29.225 [2024-12-05T12:27:00.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.225 Job: Nvme1n1 (Core 
Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:23:29.225 Nvme1n1 : 1.01 6711.00 26.21 0.00 0.00 19004.88 3053.38 26571.87 00:23:29.225 [2024-12-05T12:27:00.094Z] =================================================================================================================== 00:23:29.225 [2024-12-05T12:27:00.094Z] Total : 6711.00 26.21 0.00 0.00 19004.88 3053.38 26571.87 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 102975 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.483 rmmod nvme_tcp 00:23:29.483 rmmod nvme_fabrics 00:23:29.483 rmmod nvme_keyring 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.483 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 102933 ']' 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 102933 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 102933 ']' 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 102933 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102933 00:23:29.484 12:27:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:29.484 killing process with pid 102933 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102933' 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 102933 00:23:29.484 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 102933 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:29.742 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:23:30.001 00:23:30.001 real 0m3.753s 00:23:30.001 user 0m13.067s 00:23:30.001 sys 0m2.534s 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:23:30.001 ************************************ 00:23:30.001 END TEST nvmf_bdev_io_wait 00:23:30.001 ************************************ 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:30.001 ************************************ 00:23:30.001 START TEST nvmf_queue_depth 00:23:30.001 ************************************ 00:23:30.001 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:23:30.001 * Looking for test storage... 
00:23:30.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.261 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.262 --rc genhtml_branch_coverage=1 00:23:30.262 --rc genhtml_function_coverage=1 00:23:30.262 --rc genhtml_legend=1 00:23:30.262 --rc geninfo_all_blocks=1 00:23:30.262 --rc geninfo_unexecuted_blocks=1 00:23:30.262 00:23:30.262 ' 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.262 --rc genhtml_branch_coverage=1 00:23:30.262 --rc genhtml_function_coverage=1 00:23:30.262 --rc genhtml_legend=1 00:23:30.262 --rc geninfo_all_blocks=1 00:23:30.262 --rc geninfo_unexecuted_blocks=1 00:23:30.262 00:23:30.262 ' 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.262 --rc genhtml_branch_coverage=1 00:23:30.262 --rc genhtml_function_coverage=1 00:23:30.262 --rc genhtml_legend=1 00:23:30.262 --rc geninfo_all_blocks=1 00:23:30.262 --rc geninfo_unexecuted_blocks=1 00:23:30.262 00:23:30.262 ' 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.262 --rc genhtml_branch_coverage=1 00:23:30.262 --rc genhtml_function_coverage=1 00:23:30.262 --rc genhtml_legend=1 00:23:30.262 --rc geninfo_all_blocks=1 00:23:30.262 --rc 
geninfo_unexecuted_blocks=1 00:23:30.262 00:23:30.262 ' 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:30.262 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:30.263 12:27:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:30.263 Cannot find device "nvmf_init_br" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:30.263 Cannot find device "nvmf_init_br2" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:30.263 Cannot find device "nvmf_tgt_br" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.263 Cannot find device "nvmf_tgt_br2" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:30.263 Cannot find device "nvmf_init_br" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:30.263 Cannot find device "nvmf_init_br2" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:23:30.263 
12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:30.263 Cannot find device "nvmf_tgt_br" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:30.263 Cannot find device "nvmf_tgt_br2" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:30.263 Cannot find device "nvmf_br" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:30.263 Cannot find device "nvmf_init_if" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:30.263 Cannot find device "nvmf_init_if2" 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:23:30.263 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:23:30.523 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:30.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:30.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:23:30.782 00:23:30.782 --- 10.0.0.3 ping statistics --- 00:23:30.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.782 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:30.782 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:30.782 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.138 ms 00:23:30.782 00:23:30.782 --- 10.0.0.4 ping statistics --- 00:23:30.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.782 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:30.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:23:30.782 00:23:30.782 --- 10.0.0.1 ping statistics --- 00:23:30.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.782 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:30.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:30.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:30.782 00:23:30.782 --- 10.0.0.2 ping statistics --- 00:23:30.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.782 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=103233 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 103233 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103233 ']' 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
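For reference, the nvmf_veth_init block captured above builds the whole test topology from iproute2 primitives: four veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to a single bridge, and iptables openings for NVMe/TCP port 4420. The sketch below condenses the exact commands visible in this log (same interface names, same 10.0.0.0/24 addressing); it is an illustrative recap, not a substitute for the harness's nvmf/common.sh.

ip netns add nvmf_tgt_ns_spdk
# Four veth pairs: the *_if ends carry traffic, the *_br ends get bridged.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Target ends move into the namespace; the filesystem (and so the /var/tmp
# RPC socket) remains shared with the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiators get 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring every end up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# A single bridge joins the host-side peers so initiator and target sides meet.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Open NVMe/TCP port 4420 toward the initiator interfaces and allow hairpin
# forwarding across the bridge; the harness tags each rule with an SPDK_NVMF
# comment so the iptr cleanup seen later can strip them again.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings recorded above are the harness's sanity check that both initiator addresses and both in-namespace target addresses are reachable before any NVMe traffic is attempted.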
00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.782 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:30.782 [2024-12-05 12:27:01.493179] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:30.782 [2024-12-05 12:27:01.494035] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:30.782 [2024-12-05 12:27:01.494108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.782 [2024-12-05 12:27:01.639494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.042 [2024-12-05 12:27:01.705592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.042 [2024-12-05 12:27:01.705668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.042 [2024-12-05 12:27:01.705683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.042 [2024-12-05 12:27:01.705694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.042 [2024-12-05 12:27:01.705704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.042 [2024-12-05 12:27:01.706173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.042 [2024-12-05 12:27:01.827632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:31.042 [2024-12-05 12:27:01.827957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
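The rpc_cmd invocations that follow (target/queue_depth.sh lines 23 through 27) configure the just-started target entirely over its JSON-RPC socket. A minimal standalone equivalent is sketched below; it assumes SPDK's scripts/rpc.py (which rpc_cmd wraps in this harness) talking to the default /var/tmp/spdk.sock that waitforlisten polled above, with all flags copied verbatim from the captured run.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport; -u 8192 sets the IO unit size, the remaining flags are
# taken as-is from NVMF_TRANSPORT_OPTS in this log.
$RPC nvmf_create_transport -t tcp -o -u 8192
# 64 MiB RAM-backed bdev with 512-byte blocks to act as the namespace.
$RPC bdev_malloc_create 64 512 -b Malloc0
# Subsystem cnode1: -a allows any host NQN, -s fixes the serial number.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Listen on the in-namespace target address configured earlier.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Once the listener is up, the bdevperf side attaches from the host with bdev_nvme_attach_controller against 10.0.0.3:4420 (visible further down in this log) and drives the resulting NVMe0n1 bdev at queue depth 1024.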
00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.042 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:31.300 [2024-12-05 12:27:01.911132] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:31.300 Malloc0 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:31.300 [2024-12-05 12:27:01.987072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103271 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103271 /var/tmp/bdevperf.sock 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103271 ']' 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.300 12:27:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:31.300 [2024-12-05 12:27:02.057111] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:23:31.300 [2024-12-05 12:27:02.057223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103271 ] 00:23:31.558 [2024-12-05 12:27:02.210197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.558 [2024-12-05 12:27:02.264274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.558 12:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.558 12:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:23:31.558 12:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.558 12:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.558 12:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:31.816 NVMe0n1 00:23:31.816 12:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.816 12:27:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:31.816 Running I/O for 10 seconds... 00:23:34.125 9487.00 IOPS, 37.06 MiB/s [2024-12-05T12:27:05.579Z] 9967.50 IOPS, 38.94 MiB/s [2024-12-05T12:27:06.978Z] 10165.33 IOPS, 39.71 MiB/s [2024-12-05T12:27:07.910Z] 10248.25 IOPS, 40.03 MiB/s [2024-12-05T12:27:08.843Z] 10404.00 IOPS, 40.64 MiB/s [2024-12-05T12:27:09.775Z] 10476.17 IOPS, 40.92 MiB/s [2024-12-05T12:27:10.711Z] 10588.29 IOPS, 41.36 MiB/s [2024-12-05T12:27:11.643Z] 10641.00 IOPS, 41.57 MiB/s [2024-12-05T12:27:12.577Z] 10702.33 IOPS, 41.81 MiB/s [2024-12-05T12:27:12.835Z] 10759.40 IOPS, 42.03 MiB/s 00:23:41.966 Latency(us) 00:23:41.966 [2024-12-05T12:27:12.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.966 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:23:41.966 Verification LBA range: start 0x0 length 0x4000 00:23:41.966 NVMe0n1 : 10.07 10786.02 42.13 0.00 0.00 94604.92 21924.77 71017.19 00:23:41.966 [2024-12-05T12:27:12.836Z] =================================================================================================================== 00:23:41.967 [2024-12-05T12:27:12.836Z] Total : 10786.02 42.13 0.00 0.00 94604.92 21924.77 71017.19 00:23:41.967 { 00:23:41.967 "results": [ 00:23:41.967 { 00:23:41.967 "job": "NVMe0n1", 00:23:41.967 "core_mask": "0x1", 00:23:41.967 "workload": "verify", 00:23:41.967 "status": "finished", 00:23:41.967 "verify_range": { 00:23:41.967 "start": 0, 00:23:41.967 "length": 16384 00:23:41.967 }, 00:23:41.967 "queue_depth": 1024, 00:23:41.967 "io_size": 4096, 00:23:41.967 "runtime": 10.06738, 00:23:41.967 "iops": 10786.02377182544, 00:23:41.967 "mibps": 42.13290535869312, 00:23:41.967 "io_failed": 0, 00:23:41.967 "io_timeout": 0, 00:23:41.967 "avg_latency_us": 94604.91514520824, 00:23:41.967 "min_latency_us": 21924.77090909091, 00:23:41.967 "max_latency_us": 71017.19272727273 00:23:41.967 } 00:23:41.967 ], 
00:23:41.967 "core_count": 1 00:23:41.967 } 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103271 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103271 ']' 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103271 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103271 00:23:41.967 killing process with pid 103271 00:23:41.967 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.967 00:23:41.967 Latency(us) 00:23:41.967 [2024-12-05T12:27:12.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.967 [2024-12-05T12:27:12.836Z] =================================================================================================================== 00:23:41.967 [2024-12-05T12:27:12.836Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103271' 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103271 00:23:41.967 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103271 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.225 rmmod nvme_tcp 00:23:42.225 rmmod nvme_fabrics 00:23:42.225 rmmod nvme_keyring 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:23:42.225 12:27:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 103233 ']' 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 103233 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103233 ']' 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103233 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103233 00:23:42.225 killing process with pid 103233 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103233' 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103233 00:23:42.225 12:27:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103233 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:42.483 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:42.484 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:42.484 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:42.484 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:42.743 12:27:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:23:42.743 00:23:42.743 real 0m12.746s 00:23:42.743 user 0m20.436s 00:23:42.743 sys 0m2.717s 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:23:42.743 ************************************ 00:23:42.743 END TEST nvmf_queue_depth 00:23:42.743 ************************************ 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:42.743 ************************************ 00:23:42.743 START TEST nvmf_target_multipath 00:23:42.743 ************************************ 00:23:42.743 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:23:43.002 * Looking for test storage... 
00:23:43.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:43.002 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:43.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.003 --rc genhtml_branch_coverage=1 00:23:43.003 --rc genhtml_function_coverage=1 00:23:43.003 --rc genhtml_legend=1 00:23:43.003 --rc geninfo_all_blocks=1 00:23:43.003 --rc geninfo_unexecuted_blocks=1 00:23:43.003 00:23:43.003 ' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:43.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.003 --rc genhtml_branch_coverage=1 00:23:43.003 --rc genhtml_function_coverage=1 00:23:43.003 --rc genhtml_legend=1 00:23:43.003 --rc geninfo_all_blocks=1 00:23:43.003 --rc geninfo_unexecuted_blocks=1 00:23:43.003 00:23:43.003 ' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:43.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.003 --rc genhtml_branch_coverage=1 00:23:43.003 --rc genhtml_function_coverage=1 00:23:43.003 --rc genhtml_legend=1 00:23:43.003 --rc geninfo_all_blocks=1 00:23:43.003 --rc geninfo_unexecuted_blocks=1 00:23:43.003 00:23:43.003 ' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:43.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.003 --rc genhtml_branch_coverage=1 00:23:43.003 --rc genhtml_function_coverage=1 00:23:43.003 --rc 
genhtml_legend=1 00:23:43.003 --rc geninfo_all_blocks=1 00:23:43.003 --rc geninfo_unexecuted_blocks=1 00:23:43.003 00:23:43.003 ' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
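Sourcing nvmf/common.sh above pins the test environment: TCP listener ports 4420/4421/4422, the virtual NET_TYPE, and a fresh host identity minted with nvme gen-hostnqn, whose trailing UUID doubles as NVME_HOSTID for the connect calls later on. A sketch of the same identity setup (variable names follow the trace; the exact parameter extraction is an assumption, not the verbatim common.sh code):

  # Mint a host NQN once and reuse it for every nvme connect in the test.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, e.g. bd2c7f89-972d-...
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Used later as: nvme connect "${NVME_HOST[@]}" -t tcp -n <subnqn> -a <addr> -s 4420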
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.003 12:27:13 
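The paths/export.sh noise above is just PATH being re-prepended on every source; the interesting part is build_nvmf_app_args, which is assembling the target's command line: a shared-memory id (-i), a full trace mask (-e 0xFFFF), and, as the next lines show, the --interrupt-mode flag that gives this suite its name. A rough reconstruction of the array it builds (ordering and defaults are assumptions; the netns prefix is applied later, at common.sh@227 further down):

  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)  # shm id + log/trace mask
  NVMF_APP+=(--interrupt-mode)                      # reactors sleep instead of polling
  # After veth setup the whole thing is wrapped to run inside the target netns:
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")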
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.003 12:27:13 
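With NET_TYPE=virt, nvmftestinit now builds a self-contained topology rather than touching physical NICs: two initiator veths (10.0.0.1 and 10.0.0.2) stay in the root namespace, two target veths (10.0.0.3 and 10.0.0.4) move into nvmf_tgt_ns_spdk, and all four peer ends are enslaved to the nvmf_br bridge. The "Cannot find device" errors that follow are the expected best-effort cleanup pass before anything is created. Condensed from the commands traced below (link-up steps trimmed for brevity):

  ip netns add nvmf_tgt_ns_spdk
  # Each veth pair has a working end and a *_br end that joins the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk   # target ends live in the netns
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" up && ip link set "$port" master nvmf_br
  done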
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.003 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:43.004 Cannot find device "nvmf_init_br" 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:43.004 Cannot find device "nvmf_init_br2" 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:43.004 Cannot find device "nvmf_tgt_br" 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.004 Cannot find device "nvmf_tgt_br2" 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:23:43.004 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:23:43.262 Cannot find device "nvmf_init_br" 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:43.262 Cannot find device "nvmf_init_br2" 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:43.262 Cannot find device "nvmf_tgt_br" 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:43.262 Cannot find device "nvmf_tgt_br2" 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:43.262 Cannot find device "nvmf_br" 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:43.262 Cannot find device "nvmf_init_if" 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:43.262 Cannot find device "nvmf_init_if2" 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:43.262 12:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.262 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:43.520 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:43.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:23:43.520 00:23:43.520 --- 10.0.0.3 ping statistics --- 00:23:43.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.520 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:23:43.520 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:43.520 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:43.521 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:23:43.521 00:23:43.521 --- 10.0.0.4 ping statistics --- 00:23:43.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.521 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:23:43.521 00:23:43.521 --- 10.0.0.1 ping statistics --- 00:23:43.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.521 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:43.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:43.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:23:43.521 00:23:43.521 --- 10.0.0.2 ping statistics --- 00:23:43.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.521 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=103634 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 103634 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 103634 ']' 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
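Connectivity is proven before any NVMe traffic flows: the ipts wrapper inserts ACCEPT rules for port 4420 on both initiator interfaces plus a FORWARD rule across the bridge, tagging each with an SPDK_NVMF comment so the iptr teardown seen earlier can strip exactly these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore, and the four pings exercise both directions of the bridge. The same checks, compacted:

  # Comment-tag the rules so teardown can drop them wholesale (the ipts/iptr trick).
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:init_if 4420'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:init_if2 4420'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:bridge forward'
  # Root namespace must reach the target side, and the netns must reach the host.
  for addr in 10.0.0.3 10.0.0.4; do ping -c1 -W1 "$addr" >/dev/null || exit 1; done
  for addr in 10.0.0.1 10.0.0.2; do
      ip netns exec nvmf_tgt_ns_spdk ping -c1 -W1 "$addr" >/dev/null || exit 1
  done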
00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.521 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:43.521 [2024-12-05 12:27:14.282681] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:43.521 [2024-12-05 12:27:14.283952] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:23:43.521 [2024-12-05 12:27:14.284023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.780 [2024-12-05 12:27:14.437380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.780 [2024-12-05 12:27:14.495224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.780 [2024-12-05 12:27:14.495322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.780 [2024-12-05 12:27:14.495339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.780 [2024-12-05 12:27:14.495351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.780 [2024-12-05 12:27:14.495360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.780 [2024-12-05 12:27:14.496743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.780 [2024-12-05 12:27:14.496854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.780 [2024-12-05 12:27:14.496964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.780 [2024-12-05 12:27:14.496972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.780 [2024-12-05 12:27:14.603083] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:43.780 [2024-12-05 12:27:14.603615] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:43.780 [2024-12-05 12:27:14.603711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:43.780 [2024-12-05 12:27:14.603946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:23:43.780 [2024-12-05 12:27:14.604468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
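The startup banner confirms the flag did its job: the app announces interrupt mode, starts one reactor per core of the 0xF mask, and then switches app_thread and all four nvmf_tgt_poll_group threads to intr mode, so reactors block on event fds instead of busy-polling between requests. waitforlisten, whose banner appears above, just polls the RPC socket until the target answers; a sketch of that readiness loop (rpc_get_methods is a standard SPDK RPC; the retry budget here is an assumption):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1    # target not listening yet; try again shortly
  done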
00:23:43.780 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.780 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:23:43.780 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.781 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:43.781 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:44.039 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.039 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:44.299 [2024-12-05 12:27:14.934608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.299 12:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:44.557 Malloc0 00:23:44.557 12:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:23:44.814 12:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:45.072 12:27:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:45.330 [2024-12-05 12:27:16.034522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:45.330 12:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:23:45.587 [2024-12-05 12:27:16.266514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:23:45.587 12:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:23:45.587 12:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:23:45.845 12:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:23:45.845 12:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:23:45.845 12:27:16 
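Lines @59 through @68 above are the entire multipath provisioning story: one TCP transport, one 64 MiB malloc bdev with 512-byte blocks, one subsystem, and that single subsystem exposed through two listeners, after which the host connects once per address so the kernel's native multipath sees two controllers in front of one namespace. Collected into a script (commands taken verbatim from the trace; the transport flags -o and -u 8192 are copied as-is, and -g/-G on connect enable TCP header and data digests):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev
  "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -r
  "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.4 -s 4420
  # Two connects to the same NQN over different addresses = two paths.
  nvme connect "${NVME_HOST[@]}" -t tcp -n "$nqn" -a 10.0.0.3 -s 4420 -g -G
  nvme connect "${NVME_HOST[@]}" -t tcp -n "$nqn" -a 10.0.0.4 -s 4420 -g -G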
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:23:45.845 12:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:23:45.845 12:27:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
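With native NVMe multipath the two connects surface as a single subsystem (nvme-subsys0) holding two controller-scoped path devices, nvme0c0n1 and nvme0c1n1, behind the one namespace /dev/nvme0n1 that fio will target. waitforserial spots the namespace by its serial in lsblk, and get_subsystem walks sysfs to map the NQN back to the subsystem directory where the per-path ana_state files live. A sketch of that discovery (the subsysnqn attribute is standard sysfs; the trace additionally matches on serial, omitted here):

  # Wait for a namespace carrying our serial to appear.
  while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 1 ]; do
      sleep 1
  done
  # Map the NQN to its nvme subsystem, then enumerate the path devices.
  for s in /sys/class/nvme-subsystem/*; do
      if [ "$(cat "$s/subsysnqn")" = nqn.2016-06.io.spdk:cnode1 ]; then
          subsystem=${s##*/}                 # e.g. nvme-subsys0
          break
      fi
  done
  paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
  echo "paths: ${paths[@]##*/}"              # expect nvme0c0n1 nvme0c1n1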
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=103754 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:23:47.747 12:27:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:23:47.747 [global] 00:23:47.747 thread=1 00:23:47.747 invalidate=1 00:23:47.747 rw=randrw 00:23:47.747 time_based=1 00:23:47.747 runtime=6 00:23:47.747 ioengine=libaio 00:23:47.747 direct=1 00:23:47.747 bs=4096 00:23:47.747 iodepth=128 00:23:47.747 norandommap=0 00:23:47.747 numjobs=1 00:23:47.747 00:23:47.747 verify_dump=1 00:23:47.747 verify_backlog=512 00:23:47.747 verify_state_save=0 00:23:47.747 do_verify=1 00:23:47.747 verify=crc32c-intel 00:23:47.747 [job0] 00:23:47.747 filename=/dev/nvme0n1 00:23:47.747 Could not set queue depth (nvme0n1) 00:23:48.006 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:48.006 fio-3.35 00:23:48.006 Starting 1 thread 00:23:48.939 12:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:49.196 12:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
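Here is the heart of the test: while fio runs the verified randrw job against /dev/nvme0n1, the two listeners' ANA states are flipped (10.0.0.3 to inaccessible, 10.0.0.4 to non_optimized) and check_ana_state polls each path's sysfs ana_state until the kernel has absorbed the change, the point being that IO keeps flowing through whichever path remains usable. The bare "echo numa" just before fio presumably selects the numa iopolicy (bash xtrace hides redirections, and the nvme core exposes that knob at /sys/class/nvme-subsystem/<subsys>/iopolicy). A sketch of the polling helper, modeled on the traced logic:

  # Poll a path's ANA state until it matches, with the harness's 20-tick budget.
  check_ana_state() {
      local path=$1 want=$2 timeout=20
      local f=/sys/block/$path/ana_state
      while [ ! -e "$f" ] || [ "$(cat "$f")" != "$want" ]; do
          (( timeout-- == 0 )) && return 1   # give up after ~20 s
          sleep 1
      done
  }
  check_ana_state nvme0c0n1 inaccessible
  check_ana_state nvme0c1n1 non-optimized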
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:49.454 12:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:23:50.390 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:50.390 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:50.390 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:50.390 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:50.648 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:50.906 12:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:23:51.915 12:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:51.915 12:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:51.915 12:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:51.915 12:27:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 103754 00:23:54.451 00:23:54.451 job0: (groupid=0, jobs=1): err= 0: pid=103775: Thu Dec 5 12:27:24 2024 00:23:54.451 read: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(312MiB/6046msec) 00:23:54.451 slat (usec): min=3, max=7268, avg=43.12, stdev=206.40 00:23:54.451 clat (usec): min=1404, max=51961, avg=6508.80, stdev=1629.30 00:23:54.451 lat (usec): min=1415, max=52005, avg=6551.92, stdev=1637.23 00:23:54.451 clat percentiles (usec): 00:23:54.451 | 1.00th=[ 4015], 5.00th=[ 4817], 10.00th=[ 5473], 20.00th=[ 5800], 00:23:54.451 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6521], 00:23:54.451 | 70.00th=[ 6783], 80.00th=[ 7177], 90.00th=[ 7701], 95.00th=[ 8356], 00:23:54.451 | 99.00th=[ 9896], 99.50th=[10945], 99.90th=[16909], 99.95th=[47973], 00:23:54.451 | 99.99th=[51643] 00:23:54.451 bw ( KiB/s): min=14768, max=32424, per=51.60%, avg=27275.42, stdev=6560.25, samples=12 00:23:54.451 iops : min= 3692, max= 8106, avg=6818.83, stdev=1640.10, samples=12 00:23:54.451 write: IOPS=7853, BW=30.7MiB/s (32.2MB/s)(161MiB/5233msec); 0 zone resets 00:23:54.451 slat (usec): min=11, max=2317, avg=53.31, stdev=114.73 00:23:54.451 clat (usec): min=1306, max=51547, avg=6003.61, stdev=1913.61 00:23:54.451 lat (usec): min=1338, max=51573, avg=6056.92, stdev=1915.57 00:23:54.452 clat percentiles (usec): 00:23:54.452 | 1.00th=[ 3359], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 5473], 00:23:54.452 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:23:54.452 | 70.00th=[ 6194], 80.00th=[ 6390], 90.00th=[ 6718], 95.00th=[ 7111], 00:23:54.452 | 99.00th=[ 9110], 99.50th=[10290], 99.90th=[47449], 99.95th=[49546], 00:23:54.452 | 99.99th=[51119] 00:23:54.452 bw ( KiB/s): min=15248, max=32768, per=87.07%, avg=27351.25, stdev=6229.78, samples=12 00:23:54.452 iops : min= 3812, max= 8192, avg=6837.75, stdev=1557.57, samples=12 00:23:54.452 lat (msec) : 2=0.01%, 4=1.62%, 10=97.60%, 20=0.67%, 50=0.07% 00:23:54.452 lat (msec) : 100=0.04% 00:23:54.452 cpu : usr=6.09%, sys=25.16%, ctx=9382, majf=0, minf=102 00:23:54.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:54.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:54.452 issued rwts: total=79892,41095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:54.452 00:23:54.452 Run status group 0 (all jobs): 00:23:54.452 READ: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=312MiB (327MB), run=6046-6046msec 00:23:54.452 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=161MiB (168MB), run=5233-5233msec 00:23:54.452 00:23:54.452 Disk stats (read/write): 00:23:54.452 nvme0n1: ios=78902/40231, merge=0/0, ticks=475374/227238, in_queue=702612, util=98.65% 00:23:54.452 12:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:54.452 12:27:25 
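The fio summary above is the actual pass evidence: err=0 with roughly 13.2k read and 7.9k write IOPS sustained across the six-second window, and a bandwidth spread (min 14768 vs max 32424 KiB/s on reads) that plausibly reflects the throughput dip while paths failed over; the 99.95th/99.99th-percentile read latencies around 48-52 ms line up with the same transitions. The listeners are now being restored to optimized on both addresses, after which the whole exercise repeats under the round-robin iopolicy (the "echo round-robin" a little further down). For reference, the dumped job file corresponds to a plain fio invocation like this (a reconstruction; the device name assumes the same enumeration):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=randrw --bs=4096 --iodepth=128 --numjobs=1 \
      --time_based=1 --runtime=6 --invalidate=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0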
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:23:54.711 12:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:23:55.644 12:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:55.644 12:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:55.644 12:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:55.644 12:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:23:55.644 12:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=103903 00:23:55.644 12:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:23:55.644 12:27:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:23:55.644 [global] 00:23:55.644 thread=1 00:23:55.644 invalidate=1 00:23:55.644 rw=randrw 00:23:55.644 time_based=1 00:23:55.644 runtime=6 00:23:55.644 ioengine=libaio 00:23:55.644 direct=1 00:23:55.644 bs=4096 00:23:55.645 iodepth=128 00:23:55.645 norandommap=0 00:23:55.645 numjobs=1 00:23:55.645 00:23:55.645 verify_dump=1 00:23:55.645 verify_backlog=512 00:23:55.645 verify_state_save=0 00:23:55.645 do_verify=1 00:23:55.645 verify=crc32c-intel 00:23:55.645 [job0] 00:23:55.645 filename=/dev/nvme0n1 00:23:55.645 Could not set queue depth (nvme0n1) 00:23:55.903 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:55.903 fio-3.35 00:23:55.903 Starting 1 thread 00:23:56.838 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:57.098 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:23:57.356 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:23:57.356 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:23:57.356 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:57.356 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:57.356 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:23:57.357 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:57.357 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:23:57.357 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:23:57.357 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:57.357 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:57.357 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:57.357 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:57.357 12:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:23:58.293 12:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:58.293 12:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:58.293 12:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:58.293 12:27:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:58.551 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:23:58.809 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:23:58.809 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:23:58.809 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:58.809 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:58.809 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:23:58.809 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:58.810 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:23:58.810 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:23:58.810 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:23:58.810 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:58.810 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:58.810 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:58.810 12:27:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:23:59.746 12:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:59.746 12:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:59.746 12:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:59.746 12:27:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 103903 00:24:02.280 00:24:02.280 job0: (groupid=0, jobs=1): err= 0: pid=103924: Thu Dec 5 12:27:32 2024 00:24:02.280 read: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(313MiB/6001msec) 00:24:02.280 slat (nsec): min=1768, max=5728.7k, avg=37289.63, stdev=201223.19 00:24:02.280 clat (usec): min=322, max=18701, avg=6522.17, stdev=1836.79 00:24:02.280 lat (usec): min=345, max=18711, avg=6559.46, stdev=1845.55 00:24:02.280 clat percentiles (usec): 00:24:02.280 | 1.00th=[ 2245], 5.00th=[ 3687], 10.00th=[ 4359], 20.00th=[ 5276], 00:24:02.280 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6783], 00:24:02.280 | 70.00th=[ 7046], 80.00th=[ 7570], 90.00th=[ 8455], 95.00th=[ 9634], 00:24:02.280 | 99.00th=[12518], 99.50th=[14091], 99.90th=[16909], 99.95th=[17433], 00:24:02.280 | 99.99th=[17957] 00:24:02.280 bw ( KiB/s): min=13920, max=37760, per=52.02%, avg=27742.27, stdev=6995.70, samples=11 00:24:02.280 iops : min= 3480, max= 9440, avg=6935.55, stdev=1748.92, samples=11 00:24:02.280 write: IOPS=7906, BW=30.9MiB/s (32.4MB/s)(160MiB/5178msec); 0 zone resets 00:24:02.280 slat (usec): min=4, max=3394, avg=49.63, stdev=115.75 00:24:02.280 clat (usec): min=400, max=16482, avg=5851.87, stdev=1664.35 00:24:02.280 lat (usec): min=433, max=16510, avg=5901.50, stdev=1669.44 00:24:02.280 clat percentiles (usec): 00:24:02.280 | 1.00th=[ 2040], 5.00th=[ 2966], 10.00th=[ 3490], 20.00th=[ 4555], 00:24:02.280 | 30.00th=[ 5473], 40.00th=[ 5800], 50.00th=[ 5997], 60.00th=[ 6259], 00:24:02.280 | 70.00th=[ 6521], 80.00th=[ 6783], 90.00th=[ 7373], 95.00th=[ 8356], 00:24:02.280 | 99.00th=[11076], 99.50th=[11863], 99.90th=[14222], 99.95th=[15139], 00:24:02.280 | 99.99th=[16057] 00:24:02.280 bw ( KiB/s): 
min=14448, max=36888, per=87.67%, avg=27729.27, stdev=6735.35, samples=11 00:24:02.280 iops : min= 3612, max= 9222, avg=6932.27, stdev=1683.83, samples=11 00:24:02.280 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.09% 00:24:02.280 lat (msec) : 2=0.60%, 4=9.08%, 10=86.73%, 20=3.43% 00:24:02.280 cpu : usr=6.21%, sys=24.18%, ctx=9842, majf=0, minf=114 00:24:02.280 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:02.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:02.280 issued rwts: total=80007,40942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.280 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:02.280 00:24:02.280 Run status group 0 (all jobs): 00:24:02.280 READ: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=313MiB (328MB), run=6001-6001msec 00:24:02.280 WRITE: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=160MiB (168MB), run=5178-5178msec 00:24:02.280 00:24:02.280 Disk stats (read/write): 00:24:02.280 nvme0n1: ios=79307/39849, merge=0/0, ticks=481371/218923, in_queue=700294, util=98.67% 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:02.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:24:02.280 12:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:24:02.280 12:27:33 
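[annotation] The disconnect wait traced above (autotest_common.sh@1223-1235) greps lsblk for the subsystem serial until it disappears, first in the tree listing and then in the flat one. A sketch of that logic, assuming the retry cap and sleep interval, since the trace only shows a first, already-successful pass:

waitforserial_disconnect() {
    local serial=$1 i=0
    # Keep polling while any block device still advertises the serial,
    # in both tree (lsblk) and flat (lsblk -l) listings.
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1   # retry cap is an assumption
        sleep 1
    done
    return 0
}
waitforserial_disconnect SPDKISFASTANDAWESOME   # as at multipath.sh@135 above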
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:02.280 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:02.280 rmmod nvme_tcp 00:24:02.280 rmmod nvme_fabrics 00:24:02.280 rmmod nvme_keyring 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 103634 ']' 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 103634 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 103634 ']' 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 103634 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103634 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:02.539 killing process with pid 103634 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103634' 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 103634 00:24:02.539 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 103634 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:24:02.798 00:24:02.798 real 0m20.068s 00:24:02.798 user 1m10.386s 00:24:02.798 sys 0m8.340s 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:02.798 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:02.798 ************************************ 00:24:02.798 END TEST nvmf_target_multipath 00:24:02.798 ************************************ 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:03.057 ************************************ 00:24:03.057 START TEST nvmf_zcopy 00:24:03.057 ************************************ 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:24:03.057 * Looking for test storage... 00:24:03.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.057 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:03.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.058 --rc genhtml_branch_coverage=1 00:24:03.058 --rc genhtml_function_coverage=1 00:24:03.058 --rc genhtml_legend=1 00:24:03.058 --rc geninfo_all_blocks=1 00:24:03.058 --rc geninfo_unexecuted_blocks=1 00:24:03.058 00:24:03.058 ' 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:03.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.058 --rc genhtml_branch_coverage=1 00:24:03.058 --rc genhtml_function_coverage=1 00:24:03.058 --rc genhtml_legend=1 00:24:03.058 --rc geninfo_all_blocks=1 00:24:03.058 --rc geninfo_unexecuted_blocks=1 00:24:03.058 00:24:03.058 ' 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:03.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.058 --rc genhtml_branch_coverage=1 00:24:03.058 --rc genhtml_function_coverage=1 00:24:03.058 --rc genhtml_legend=1 00:24:03.058 --rc geninfo_all_blocks=1 00:24:03.058 --rc geninfo_unexecuted_blocks=1 00:24:03.058 00:24:03.058 ' 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:03.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.058 --rc genhtml_branch_coverage=1 00:24:03.058 --rc genhtml_function_coverage=1 00:24:03.058 --rc genhtml_legend=1 00:24:03.058 --rc geninfo_all_blocks=1 00:24:03.058 --rc geninfo_unexecuted_blocks=1 00:24:03.058 00:24:03.058 ' 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.058 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.317 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.318 12:27:33 
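[annotation] The ever-longer PATH values above are paths/export.sh prepending its toolchain directories unconditionally each time it is sourced. A hypothetical duplicate-free variant; path_prepend is not part of the repo, and the directories are the ones visible in the trace:

path_prepend() {
    # Prepend $1 only if it is not already a PATH component.
    case ":$PATH:" in
        *":$1:"*) ;;
        *) PATH=$1:$PATH ;;
    esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
export PATH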
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:03.318 12:27:33 
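[annotation] The variables just bound (initiator IPs 10.0.0.1/.2, target IPs 10.0.0.3/.4, the interface and bridge names) describe the topology nvmf_veth_init builds in the steps that follow. Condensed here to one veth pair per side; the real setup at nvmf/common.sh@177-219 below creates two of each:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.3   # initiator-side sanity check, as at nvmf/common.sh@222 below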
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:03.318 Cannot find device "nvmf_init_br" 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:03.318 Cannot find device "nvmf_init_br2" 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:03.318 Cannot find device "nvmf_tgt_br" 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:03.318 Cannot find device "nvmf_tgt_br2" 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:24:03.318 12:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:03.318 Cannot find device "nvmf_init_br" 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:03.318 Cannot find device "nvmf_init_br2" 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:03.318 Cannot find device "nvmf_tgt_br" 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:03.318 Cannot find device "nvmf_tgt_br2" 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:03.318 Cannot find device 
"nvmf_br" 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:03.318 Cannot find device "nvmf_init_if" 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:03.318 Cannot find device "nvmf_init_if2" 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:03.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:03.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:03.318 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:03.577 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:03.577 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:03.577 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:03.577 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:03.577 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:03.577 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:03.577 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:03.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:03.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:24:03.578 00:24:03.578 --- 10.0.0.3 ping statistics --- 00:24:03.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.578 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:03.578 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:03.578 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:24:03.578 00:24:03.578 --- 10.0.0.4 ping statistics --- 00:24:03.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.578 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:03.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:24:03.578 00:24:03.578 --- 10.0.0.1 ping statistics --- 00:24:03.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.578 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:03.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:24:03.578 00:24:03.578 --- 10.0.0.2 ping statistics --- 00:24:03.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.578 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=104251 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 104251 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 104251 ']' 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.578 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:03.578 [2024-12-05 12:27:34.389844] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:03.578 [2024-12-05 12:27:34.392044] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:24:03.578 [2024-12-05 12:27:34.392160] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.838 [2024-12-05 12:27:34.547248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.838 [2024-12-05 12:27:34.608965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.838 [2024-12-05 12:27:34.609041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.838 [2024-12-05 12:27:34.609066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.838 [2024-12-05 12:27:34.609077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.838 [2024-12-05 12:27:34.609086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.838 [2024-12-05 12:27:34.609588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.098 [2024-12-05 12:27:34.717991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:04.098 [2024-12-05 12:27:34.718394] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
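[annotation] waitforlisten above (autotest_common.sh@835-868) blocks until the freshly launched nvmf_tgt answers on its RPC socket. A sketch under the assumption that the liveness probe is a plain rpc_get_methods call; the trace shows the variables and the banner but not the probe itself:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
            rpc_get_methods &> /dev/null; then
            return 0   # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1
}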
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:04.098 [2024-12-05 12:27:34.806524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:04.098 [2024-12-05 12:27:34.826671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:24:04.098 12:27:34 
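[annotation] The rpc_cmd sequence above and just below (zcopy.sh@22-30) provisions the zcopy target end to end: a TCP transport with zero-copy enabled, a subsystem capped at 10 namespaces, two listeners, and a 32 MiB malloc bdev exported as namespace 1. Condensed to direct rpc.py calls; rpc_cmd in the harness is, as far as the trace shows, a wrapper around the same RPCs:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1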
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:24:04.098 malloc0
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:24:04.098 {
00:24:04.098 "params": {
00:24:04.098 "name": "Nvme$subsystem",
00:24:04.098 "trtype": "$TEST_TRANSPORT",
00:24:04.098 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:04.098 "adrfam": "ipv4",
00:24:04.098 "trsvcid": "$NVMF_PORT",
00:24:04.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:04.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:04.098 "hdgst": ${hdgst:-false},
00:24:04.098 "ddgst": ${ddgst:-false}
00:24:04.098 },
00:24:04.098 "method": "bdev_nvme_attach_controller"
00:24:04.098 }
00:24:04.098 EOF
00:24:04.098 )")
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:24:04.098 12:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:24:04.098 "params": {
00:24:04.098 "name": "Nvme1",
00:24:04.098 "trtype": "tcp",
00:24:04.098 "traddr": "10.0.0.3",
00:24:04.098 "adrfam": "ipv4",
00:24:04.098 "trsvcid": "4420",
00:24:04.098 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:04.098 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:04.098 "hdgst": false,
00:24:04.098 "ddgst": false
00:24:04.098 },
00:24:04.098 "method": "bdev_nvme_attach_controller"
00:24:04.098 }'
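Unrolled, the rpc_cmd calls traced above amount to this target-side setup sequence. The sketch below replays them with scripts/rpc.py against the default /var/tmp/spdk.sock; the wrapper and socket path are assumptions, the arguments are verbatim from the trace:

    # TCP transport with zero-copy enabled, then one subsystem, two listeners,
    # a 32 MiB malloc bdev with 4096-byte blocks, and that bdev as namespace 1.
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1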
00:24:04.098 [2024-12-05 12:27:34.931268] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:24:04.099 [2024-12-05 12:27:34.931389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104294 ]
00:24:04.359 [2024-12-05 12:27:35.082500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:04.359 [2024-12-05 12:27:35.136917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:04.617 Running I/O for 10 seconds...
00:24:06.487 6739.00 IOPS, 52.65 MiB/s
[2024-12-05T12:27:38.771Z] 6827.00 IOPS, 53.34 MiB/s
[2024-12-05T12:27:39.336Z] 6855.33 IOPS, 53.56 MiB/s
[2024-12-05T12:27:40.712Z] 6880.75 IOPS, 53.76 MiB/s
[2024-12-05T12:27:41.647Z] 6869.60 IOPS, 53.67 MiB/s
[2024-12-05T12:27:42.582Z] 6882.17 IOPS, 53.77 MiB/s
[2024-12-05T12:27:43.518Z] 6890.43 IOPS, 53.83 MiB/s
[2024-12-05T12:27:44.465Z] 6899.25 IOPS, 53.90 MiB/s
[2024-12-05T12:27:45.403Z] 6906.11 IOPS, 53.95 MiB/s
[2024-12-05T12:27:45.403Z] 6902.30 IOPS, 53.92 MiB/s
00:24:14.534                                                                          Latency(us)
[2024-12-05T12:27:45.403Z] Device Information        : runtime(s)     IOPS     MiB/s   Fail/s   TO/s    Average      min       max
00:24:14.534 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:24:14.534 Verification LBA range: start 0x0 length 0x1000
00:24:14.534 Nvme1n1                                 : 10.01       6903.84    53.94    0.00     0.00   18484.13   569.72  31933.91
[2024-12-05T12:27:45.403Z] ===================================================================================================
[2024-12-05T12:27:45.403Z] Total                      :              6903.84    53.94    0.00     0.00   18484.13   569.72  31933.91
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=104401
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:24:14.794 {
00:24:14.794 "params": {
00:24:14.794 "name": "Nvme$subsystem",
00:24:14.794 "trtype": "$TEST_TRANSPORT",
00:24:14.794 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:14.794 "adrfam": "ipv4",
00:24:14.794 "trsvcid": "$NVMF_PORT",
00:24:14.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:14.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:14.794 "hdgst": ${hdgst:-false},
00:24:14.794 "ddgst": ${ddgst:-false}
00:24:14.794 },
00:24:14.794 "method": "bdev_nvme_attach_controller"
00:24:14.794 }
00:24:14.794 EOF
00:24:14.794 )")
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
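Both bdevperf passes follow the pattern visible in the trace: gen_nvmf_target_json emits the bdev_nvme_attach_controller config shown above, and bdevperf reads it from a file descriptor. Sketched as standalone commands, with process substitution standing in for the harness's /dev/fd/62 and /dev/fd/63 plumbing (that substitution is an assumption; the flags are verbatim). As a sanity check on the table above, the MiB/s column is just IOPS times the 8192-byte I/O size: 6903.84 * 8192 / 2^20 = 53.94 MiB/s.

    # First pass: 10 s verify workload, queue depth 128, 8 KiB I/O.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

    # Second pass: 5 s random read/write at a 50% read mix, same depth and size.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192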
00:24:14.794 [2024-12-05 12:27:45.590274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:14.794 [2024-12-05 12:27:45.590378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:24:14.794 2024/12/05 12:27:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:24:14.794 12:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:24:14.794 "params": {
00:24:14.794 "name": "Nvme1",
00:24:14.794 "trtype": "tcp",
00:24:14.794 "traddr": "10.0.0.3",
00:24:14.794 "adrfam": "ipv4",
00:24:14.794 "trsvcid": "4420",
00:24:14.794 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:14.794 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:14.794 "hdgst": false,
00:24:14.794 "ddgst": false
00:24:14.794 },
00:24:14.794 "method": "bdev_nvme_attach_controller"
00:24:14.794 }'
[... the same 'Requested NSID 1 already in use' / 'Unable to add namespace' / JSON-RPC Code=-32602 block repeats verbatim at 12:27:45.602222, .614173, .626159 and .638157 ...]
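Every Code=-32602 block from here on is the same collision: namespace 1 of cnode1 was already bound to malloc0 during setup, so each reissued nvmf_subsystem_add_ns call is rejected on the target side ("Requested NSID 1 already in use") and surfaces in the Go RPC client's log as Invalid parameters. One such iteration, sketched with rpc.py under the same path assumptions as above:

    # NSID 1 is already taken by malloc0, so this call fails with JSON-RPC -32602.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 ||
        echo "rejected as expected: Code=-32602 Msg=Invalid parameters"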
00:24:14.795 [2024-12-05 12:27:45.648368] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization...
00:24:14.795 [2024-12-05 12:27:45.648465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104401 ]
[... the same add_ns error block repeats verbatim, timestamps advancing from 12:27:45.650185 to 12:27:45.782135 ...]
00:24:15.055 [2024-12-05 12:27:45.791971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... add_ns error blocks repeat at 12:27:45.794207, .806167, .818195 and .830183 ...]
00:24:15.056 [2024-12-05 12:27:45.833644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... the same add_ns error block keeps repeating verbatim, timestamps advancing from 12:27:45.842216 to 12:27:46.002207 ...]
[... add_ns error blocks repeat at 12:27:46.010263, .022937 and .030164 ...]
00:24:15.317 Running I/O for 5 seconds...
[... while the 5 s run proceeds, the same add_ns error block keeps repeating verbatim, timestamps advancing from 12:27:46.043003 to 12:27:46.857408 ...]
00:24:16.110 [2024-12-05 12:27:46.871015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:16.110 [2024-12-05 12:27:46.871045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:16.110 2024/12/05 12:27:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.110 [2024-12-05 12:27:46.889574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.110 [2024-12-05 12:27:46.889603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.110 2024/12/05 12:27:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.110 [2024-12-05 12:27:46.901083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.110 [2024-12-05 12:27:46.901112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.110 2024/12/05 12:27:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.110 [2024-12-05 12:27:46.916910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.110 [2024-12-05 12:27:46.916940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.111 2024/12/05 12:27:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.111 [2024-12-05 12:27:46.932923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.111 [2024-12-05 12:27:46.932954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.111 2024/12/05 12:27:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.111 [2024-12-05 12:27:46.947332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.111 [2024-12-05 12:27:46.947379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.111 2024/12/05 12:27:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.111 [2024-12-05 12:27:46.965194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.111 [2024-12-05 12:27:46.965224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.111 2024/12/05 12:27:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.369 [2024-12-05 12:27:46.979661] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.369 [2024-12-05 12:27:46.979706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.369 2024/12/05 12:27:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.369 [2024-12-05 12:27:46.997347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:46.997375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.010505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.010550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.028655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.028685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 12889.00 IOPS, 100.70 MiB/s [2024-12-05T12:27:47.239Z] [2024-12-05 12:27:47.043452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.043498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.061478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.061506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.073179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.073208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.087549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.087595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.105354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.105382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.116707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.116738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.133470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.133517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.143750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.143797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.159062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.159108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.177821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.177868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.188321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.188376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.203714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.203767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.221340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.221385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.370 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.370 [2024-12-05 12:27:47.234569] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.370 [2024-12-05 12:27:47.234614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.629 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.629 [2024-12-05 12:27:47.252905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.629 [2024-12-05 12:27:47.252951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.629 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.629 [2024-12-05 12:27:47.267446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.629 [2024-12-05 12:27:47.267493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.629 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.629 [2024-12-05 12:27:47.285990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.629 [2024-12-05 12:27:47.286037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.629 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.629 [2024-12-05 12:27:47.297701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.629 [2024-12-05 12:27:47.297747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.629 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.629 [2024-12-05 12:27:47.306999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.629 [2024-12-05 12:27:47.307045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.629 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.629 [2024-12-05 12:27:47.322769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.629 [2024-12-05 12:27:47.322816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.629 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.629 [2024-12-05 12:27:47.341741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.629 [2024-12-05 12:27:47.341787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.629 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.629 [2024-12-05 12:27:47.355915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.629 [2024-12-05 12:27:47.355961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.630 [2024-12-05 12:27:47.373538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.373585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.630 [2024-12-05 12:27:47.383031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.383078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.630 [2024-12-05 12:27:47.398529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.398575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.630 [2024-12-05 12:27:47.417899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.417945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.630 [2024-12-05 12:27:47.431574] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.431620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:24:16.630 [2024-12-05 12:27:47.449257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.449299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.630 [2024-12-05 12:27:47.461023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.461052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.630 [2024-12-05 12:27:47.477022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.477052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.630 [2024-12-05 12:27:47.490546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.630 [2024-12-05 12:27:47.490591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.630 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.508929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.508959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.521576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.521606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.534555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.534600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.553099] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.553130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.566695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.566725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.585743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.585773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.597329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.597374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.610897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.610924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.629649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.629680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.643645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.643675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.661806] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.661834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.671809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.671838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.686341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.686385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.695176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.695207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.709875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.709905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.722529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.722559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.741538] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.741584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:16.889 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:16.889 [2024-12-05 12:27:47.753725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:16.889 [2024-12-05 12:27:47.753770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.148 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.148 [2024-12-05 12:27:47.762959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.148 [2024-12-05 12:27:47.763004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.148 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.148 [2024-12-05 12:27:47.778551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.148 [2024-12-05 12:27:47.778579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.148 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.148 [2024-12-05 12:27:47.798383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.148 [2024-12-05 12:27:47.798430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.148 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.148 [2024-12-05 12:27:47.808820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.148 [2024-12-05 12:27:47.808847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.148 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.823581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.823613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.840572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.840603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.857638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.857667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.869262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.869316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.883252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.883321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:24:17.149 [2024-12-05 12:27:47.901388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.901415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.912809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.912838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.928717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.928746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.944706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.944736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.960739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.960768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.976673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.976703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:47.992727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:47.992756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.149 [2024-12-05 12:27:48.008559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.149 [2024-12-05 12:27:48.008589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.149 2024/12/05 12:27:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.409 [2024-12-05 12:27:48.025551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.409 [2024-12-05 12:27:48.025581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.409 2024/12/05 12:27:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.409 [2024-12-05 12:27:48.034943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.409 [2024-12-05 12:27:48.034989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.409 12802.00 IOPS, 100.02 MiB/s [2024-12-05T12:27:48.278Z] 2024/12/05 12:27:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.409 [2024-12-05 12:27:48.047461] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.409 [2024-12-05 12:27:48.047507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.409 2024/12/05 12:27:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.409 [2024-12-05 12:27:48.065511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.409 [2024-12-05 12:27:48.065541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:17.409 2024/12/05 12:27:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:17.409 [2024-12-05 12:27:48.078524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:17.409 [2024-12-05 12:27:48.078570] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:17.409 2024/12/05 12:27:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:24:17.409 [2024-12-05 12:27:48.097039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:17.409 [2024-12-05 12:27:48.097069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same three-line pattern (JSON-RPC error with identical parameters, "Requested NSID 1 already in use", "Unable to add namespace") repeats back to back; only the timestamps advance, 12:27:48.109 through 12:27:49.005, Jenkins timestamps 00:24:17.409-00:24:18.191 ...]
00:24:18.191 [2024-12-05 12:27:49.005022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
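For reference, each failing call above is a plain JSON-RPC 2.0 request against SPDK's RPC socket. The Go sketch below is not part of the test harness; it is a minimal reproduction under stated assumptions. The socket path /var/tmp/spdk.sock is an assumed default (the log does not show it), while the method name, NQN, bdev name, and NSID are taken verbatim from the log. Because NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, spdk_nvmf_subsystem_add_ns_ext rejects the request and the server answers with Code=-32602 (Invalid parameters).

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
)

func main() {
	// Dial SPDK's JSON-RPC unix socket; /var/tmp/spdk.sock is the usual
	// default and an assumption here -- the log does not show the path.
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Method name and params copied from the failing calls in the log above.
	req := map[string]any{
		"jsonrpc": "2.0",
		"id":      1,
		"method":  "nvmf_subsystem_add_ns",
		"params": map[string]any{
			"nqn": "nqn.2016-06.io.spdk:cnode1",
			"namespace": map[string]any{
				"bdev_name": "malloc0",
				"nsid":      1, // NSID 1 is already attached, so the server rejects the call
			},
		},
	}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		log.Fatal(err)
	}

	// Expect an error object, e.g. {"code":-32602,"message":"Invalid parameters"}.
	var resp map[string]any
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%v\n", resp["error"])
}
```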
00:24:18.191 [2024-12-05 12:27:49.005049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:18.191 2024/12/05 12:27:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:24:18.192 12753.67 IOPS, 99.64 MiB/s [2024-12-05T12:27:49.061Z]
[... the same three-line error pattern repeats with identical parameters and advancing timestamps, 12:27:49.019 through 12:27:49.454, Jenkins timestamps 00:24:18.191-00:24:18.713 ...]
00:24:18.713 [2024-12-05 12:27:49.473449] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
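The %!s(bool=false) tokens in the params dumps above are not corruption: Go's fmt package prints %!<verb>(<type>=<value>) whenever a formatting verb does not match its operand's type, so formatting the boolean hide_metadata / no_auto_visible flags with %s yields exactly this output. That the harness uses %s for these fields is an inference from the output, not something shown directly in the log. A minimal reproduction:

```go
package main

import "fmt"

func main() {
	hideMetadata := false

	// %s on a bool is a verb/type mismatch; fmt flags it in the output
	// instead of failing.
	fmt.Printf("hide_metadata:%s\n", hideMetadata) // hide_metadata:%!s(bool=false)

	// %t (or the catch-all %v) is the matching verb for booleans.
	fmt.Printf("hide_metadata:%t\n", hideMetadata) // hide_metadata:false
	fmt.Printf("hide_metadata:%v\n", hideMetadata) // hide_metadata:false
}
```

go vet's printf check would flag the %s/bool mismatch at build time, which is the usual way such output is caught.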
00:24:18.713 [2024-12-05 12:27:49.473479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:18.713 2024/12/05 12:27:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line error pattern repeats with identical parameters and advancing timestamps, 12:27:49.486 through 12:27:49.996, Jenkins timestamps 00:24:18.713-00:24:19.233 ...]
00:24:19.233 [2024-12-05 12:27:50.011214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:24:19.233 [2024-12-05 12:27:50.011261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:24:19.233 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.233 [2024-12-05 12:27:50.031331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.233 [2024-12-05 12:27:50.031379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.233 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.233 12741.00 IOPS, 99.54 MiB/s [2024-12-05T12:27:50.102Z] [2024-12-05 12:27:50.048047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.233 [2024-12-05 12:27:50.048093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.233 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.233 [2024-12-05 12:27:50.065998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.233 [2024-12-05 12:27:50.066044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.233 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.233 [2024-12-05 12:27:50.076064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.233 [2024-12-05 12:27:50.076109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.233 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.233 [2024-12-05 12:27:50.090047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.233 [2024-12-05 12:27:50.090097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.233 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.233 [2024-12-05 12:27:50.099561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.233 [2024-12-05 12:27:50.099608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.113639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.113668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.125992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.126022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.138649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.138694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.157267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.157307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.171162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.171192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.189421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.189449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.203189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.203218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.221358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.221386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.232632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.232660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.248844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.248872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.264731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.264762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.279562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.279607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.297232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.297259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.311436] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.311482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.493 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.493 [2024-12-05 12:27:50.330104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.493 [2024-12-05 12:27:50.330134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.494 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.494 [2024-12-05 12:27:50.341332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.494 [2024-12-05 12:27:50.341360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.494 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.494 [2024-12-05 12:27:50.355049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.494 [2024-12-05 12:27:50.355080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.494 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.373078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.373108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.387472] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.387503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.405588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.405634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.417364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.417393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.431227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.431256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.448966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.448996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.463384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.463412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.481299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 
12:27:50.481327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.494813] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.494843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.513248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.513289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.526045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.526079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.535280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.535364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.752 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.752 [2024-12-05 12:27:50.547890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.752 [2024-12-05 12:27:50.547921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.753 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.753 [2024-12-05 12:27:50.565922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.753 [2024-12-05 12:27:50.565952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.753 2024/12/05 12:27:50 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.753 [2024-12-05 12:27:50.578369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.753 [2024-12-05 12:27:50.578414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.753 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.753 [2024-12-05 12:27:50.597468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.753 [2024-12-05 12:27:50.597514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.753 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.753 [2024-12-05 12:27:50.609411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.753 [2024-12-05 12:27:50.609457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:19.753 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:19.753 [2024-12-05 12:27:50.619342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:19.753 [2024-12-05 12:27:50.619388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.634503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.634547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.653589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.653619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.665579] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.665609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.679821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.679850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.698103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.698133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.707760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.707790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.725309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.725337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.738907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.738937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.756898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.756928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.011 [2024-12-05 12:27:50.772757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.011 [2024-12-05 12:27:50.772787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.011 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.012 [2024-12-05 12:27:50.788803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.012 [2024-12-05 12:27:50.788833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.012 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.012 [2024-12-05 12:27:50.803658] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.012 [2024-12-05 12:27:50.803689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.012 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.012 [2024-12-05 12:27:50.821575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.012 [2024-12-05 12:27:50.821604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.012 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.012 [2024-12-05 12:27:50.834705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.012 [2024-12-05 12:27:50.834734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.012 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.012 [2024-12-05 12:27:50.854003] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.012 [2024-12-05 12:27:50.854050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.012 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.012 [2024-12-05 12:27:50.864707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.012 [2024-12-05 12:27:50.864753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.012 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.271 [2024-12-05 12:27:50.879213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.271 [2024-12-05 12:27:50.879243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.271 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.271 [2024-12-05 12:27:50.898810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.271 [2024-12-05 12:27:50.898840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.271 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.271 [2024-12-05 12:27:50.917332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.271 [2024-12-05 12:27:50.917377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.271 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.271 [2024-12-05 12:27:50.931349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.271 [2024-12-05 12:27:50.931396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:50.949405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 
12:27:50.949451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:50.959500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:50.959547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:50.974971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:50.975016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:50.994060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:50.994110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.006629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.006674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.026186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.026231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.036646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.036690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 12754.80 IOPS, 99.65 
00:24:20.272 Latency(us) [2024-12-05T12:27:51.141Z]
00:24:20.272 Device Information : runtime(s)      IOPS   MiB/s  Fail/s  TO/s   Average      min       max
00:24:20.272 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:24:20.272 Nvme1n1            :       5.01  12757.34   99.67    0.00  0.00  10022.15  2487.39  18707.55
00:24:20.272 ==========================================================================================
00:24:20.272 Total              :             12757.34   99.67    0.00  0.00  10022.15  2487.39  18707.55
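A quick sanity check on the summary row above: at the 8192-byte IO size, 12757.34 IOPS works out to about 99.67 MiB/s, matching the MiB/s column, and by Little's law a queue depth of 128 divided by the 10022.15 us average latency predicts roughly 12.8k IOPS, consistent with the measured rate. For example:

python3 -c 'print(12757.34 * 8192 / 2**20)'   # ~99.67 -> MiB/s column
python3 -c 'print(128 / 10022.15e-6)'         # ~12771.7 -> IOPS implied by qd=128 and avg latency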
params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.094163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.094185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.102160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.102182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.110132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.110169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.118152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.118174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.126271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.126336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.272 [2024-12-05 12:27:51.134150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.272 [2024-12-05 12:27:51.134188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.272 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.532 [2024-12-05 12:27:51.142148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:20.532 [2024-12-05 12:27:51.142169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:20.532 2024/12/05 12:27:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:20.532 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (104401) - No such process 00:24:20.532 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 104401
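The -32602 failures above come from zcopy.sh repeating the same add-namespace RPC while the I/O process it just killed (PID 104401) winds down: NSID 1 is still registered on cnode1, so every retry is rejected. A minimal standalone sketch of that collision, using SPDK's scripts/rpc.py with the names taken from the logged params (the test itself goes through its rpc_cmd wrapper around the same calls):

# Sketch only: reconstructed from the logged params
# (bdev_name:malloc0, nsid:1, nqn:nqn.2016-06.io.spdk:cnode1).
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Repeating the identical call fails exactly as logged above:
# "Requested NSID 1 already in use" -> Code=-32602 Msg=Invalid parameters
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1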
00:24:20.532 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:20.533 delay0 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.533 12:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:24:20.803 [2024-12-05 12:27:51.490453] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:24:29.059 Initializing NVMe Controllers 00:24:29.059 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.059 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:29.059 Initialization complete. Launching workers. 
00:24:29.059 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 27170 00:24:29.059 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27287, failed to submit 123 00:24:29.059 success 27218, unsuccessful 69, failed 0 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.059 rmmod nvme_tcp 00:24:29.059 rmmod nvme_fabrics 00:24:29.059 rmmod nvme_keyring 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 104251 ']' 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 104251 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 104251 ']' 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 104251 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104251 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:29.059 killing process with pid 104251 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104251' 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 104251 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 104251 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.059 12:27:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:29.059 12:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:24:29.059 00:24:29.059 real 0m25.375s 00:24:29.059 user 0m37.453s 00:24:29.059 sys 0m9.497s 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:24:29.059 ************************************ 00:24:29.059 END TEST nvmf_zcopy 00:24:29.059 ************************************ 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:29.059 ************************************ 00:24:29.059 START TEST nvmf_nmic 00:24:29.059 ************************************ 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:24:29.059 * Looking for test storage... 00:24:29.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.059 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:29.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.060 --rc genhtml_branch_coverage=1 00:24:29.060 --rc genhtml_function_coverage=1 00:24:29.060 --rc genhtml_legend=1 00:24:29.060 --rc geninfo_all_blocks=1 00:24:29.060 --rc geninfo_unexecuted_blocks=1 00:24:29.060 00:24:29.060 ' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:29.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.060 --rc genhtml_branch_coverage=1 00:24:29.060 --rc genhtml_function_coverage=1 00:24:29.060 --rc genhtml_legend=1 00:24:29.060 --rc geninfo_all_blocks=1 00:24:29.060 --rc geninfo_unexecuted_blocks=1 00:24:29.060 00:24:29.060 ' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:29.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.060 --rc genhtml_branch_coverage=1 00:24:29.060 --rc genhtml_function_coverage=1 00:24:29.060 --rc genhtml_legend=1 00:24:29.060 --rc geninfo_all_blocks=1 00:24:29.060 --rc geninfo_unexecuted_blocks=1 00:24:29.060 00:24:29.060 ' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:29.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.060 --rc genhtml_branch_coverage=1 00:24:29.060 --rc genhtml_function_coverage=1 00:24:29.060 --rc genhtml_legend=1 00:24:29.060 --rc geninfo_all_blocks=1 00:24:29.060 --rc geninfo_unexecuted_blocks=1 00:24:29.060 00:24:29.060 ' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.060 12:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:29.060 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:29.061 Cannot find device "nvmf_init_br" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:29.061 Cannot find device "nvmf_init_br2" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:29.061 Cannot find device "nvmf_tgt_br" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:29.061 Cannot find device "nvmf_tgt_br2" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:29.061 Cannot find device "nvmf_init_br" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:29.061 Cannot find device "nvmf_init_br2" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:29.061 Cannot find device "nvmf_tgt_br" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:29.061 Cannot find device "nvmf_tgt_br2" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:29.061 Cannot find device "nvmf_br" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:29.061 Cannot find device "nvmf_init_if" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:29.061 Cannot find device "nvmf_init_if2" 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:29.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:29.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:29.061 12:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:29.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:29.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:24:29.061 00:24:29.061 --- 10.0.0.3 ping statistics --- 00:24:29.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.061 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:29.061 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:29.061 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:24:29.061 00:24:29.061 --- 10.0.0.4 ping statistics --- 00:24:29.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.061 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:29.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:29.061 00:24:29.061 --- 10.0.0.1 ping statistics --- 00:24:29.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.061 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:29.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:24:29.061 00:24:29.061 --- 10.0.0.2 ping statistics --- 00:24:29.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.061 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.061 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=104785 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 104785 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 104785 ']' 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.062 12:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.062 [2024-12-05 12:27:59.785081] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:29.062 [2024-12-05 12:27:59.786207] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:24:29.062 [2024-12-05 12:27:59.786265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.321 [2024-12-05 12:27:59.926520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.321 [2024-12-05 12:27:59.977660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.321 [2024-12-05 12:27:59.977718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.321 [2024-12-05 12:27:59.977729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.321 [2024-12-05 12:27:59.977753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.321 [2024-12-05 12:27:59.977760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.321 [2024-12-05 12:27:59.979049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.321 [2024-12-05 12:27:59.979191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.321 [2024-12-05 12:27:59.979367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.321 [2024-12-05 12:27:59.979370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.321 [2024-12-05 12:28:00.076786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:29.321 [2024-12-05 12:28:00.077437] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:29.321 [2024-12-05 12:28:00.077502] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
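The EAL and reactor lines above show how this suite launches the target: inside the nvmf_tgt_ns_spdk namespace, on all four cores, with --interrupt-mode, which is what triggers the per-thread "to intr mode" notices around this point. Reconstructed from the nvmf/common.sh@508-509 trace, not a new command:

# Sketch of the launch as traced above; waitforlisten then polls /var/tmp/spdk.sock.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!   # 104785 in this run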
00:24:29.321 [2024-12-05 12:28:00.077847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:29.321 [2024-12-05 12:28:00.078227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.321 [2024-12-05 12:28:00.160433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.321 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 Malloc0 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:29.581 
12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 [2024-12-05 12:28:00.232550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:24:29.581 test case1: single bdev can't be used in multiple subsystems 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 [2024-12-05 12:28:00.256327] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:24:29.581 [2024-12-05 12:28:00.256367] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:24:29.581 [2024-12-05 12:28:00.256381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:29.581 2024/12/05 12:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:29.581 request: 00:24:29.581 { 00:24:29.581 "method": "nvmf_subsystem_add_ns", 00:24:29.581 "params": { 00:24:29.581 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:24:29.581 "namespace": { 00:24:29.581 "bdev_name": "Malloc0", 00:24:29.581 "no_auto_visible": false, 00:24:29.581 "hide_metadata": false 00:24:29.581 } 00:24:29.581 } 00:24:29.581 } 00:24:29.581 Got JSON-RPC error response 00:24:29.581 GoRPCClient: error on JSON-RPC call 00:24:29.581 
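test case1 above hinges on bdev claiming: adding Malloc0 to cnode1 takes an exclusive_write claim on the bdev, so the attempt to add the same bdev to cnode2 is rejected at bdev_open with error=-1, surfacing as the -32602 response just shown. As plain rpc.py calls, a sketch of the same two-subsystem sequence the rpc_cmd traces run:

# Succeeds: first subsystem claims Malloc0 (type exclusive_write).
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Fails as logged: "bdev Malloc0 already claimed ... cannot be opened, error=-1".
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0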
12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:24:29.581 Adding namespace failed - expected result. 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:24:29.581 test case2: host connect to nvmf target in multiple paths 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 [2024-12-05 12:28:00.268422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:29.581 12:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:24:32.114 12:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:32.114 12:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:32.114 12:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:24:32.114 12:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:32.114 12:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.114 12:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:24:32.114 12:28:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:32.114 [global] 00:24:32.114 thread=1 00:24:32.114 invalidate=1 00:24:32.114 rw=write 00:24:32.114 time_based=1 00:24:32.114 runtime=1 00:24:32.114 ioengine=libaio 00:24:32.114 direct=1 00:24:32.114 bs=4096 00:24:32.114 iodepth=1 00:24:32.114 norandommap=0 00:24:32.114 numjobs=1 00:24:32.114 00:24:32.114 verify_dump=1 00:24:32.114 verify_backlog=512 00:24:32.114 verify_state_save=0 00:24:32.114 do_verify=1 00:24:32.114 verify=crc32c-intel 00:24:32.114 [job0] 00:24:32.114 filename=/dev/nvme0n1 00:24:32.114 Could not set queue depth (nvme0n1) 00:24:32.114 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:32.114 fio-3.35 00:24:32.114 Starting 1 thread 00:24:33.047 00:24:33.048 job0: (groupid=0, jobs=1): err= 0: pid=104877: Thu Dec 5 12:28:03 2024 00:24:33.048 read: IOPS=2676, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:24:33.048 slat (nsec): min=13950, max=86471, avg=16934.51, stdev=6416.18 00:24:33.048 clat (usec): min=150, max=1497, avg=181.55, stdev=31.00 00:24:33.048 lat (usec): min=165, max=1513, avg=198.48, stdev=31.68 00:24:33.048 clat percentiles (usec): 00:24:33.048 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:24:33.048 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:24:33.048 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 219], 00:24:33.048 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 277], 99.95th=[ 351], 00:24:33.048 | 99.99th=[ 1500] 00:24:33.048 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:24:33.048 slat (usec): min=19, max=109, avg=25.94, stdev=10.19 00:24:33.048 clat (usec): min=95, max=2331, avg=123.53, stdev=43.57 00:24:33.048 lat (usec): min=123, max=2369, avg=149.47, stdev=45.39 00:24:33.048 clat percentiles (usec): 00:24:33.048 | 1.00th=[ 105], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 113], 00:24:33.048 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 121], 00:24:33.048 | 70.00th=[ 124], 80.00th=[ 130], 90.00th=[ 141], 95.00th=[ 155], 00:24:33.048 | 99.00th=[ 188], 99.50th=[ 202], 99.90th=[ 223], 99.95th=[ 570], 00:24:33.048 | 99.99th=[ 2343] 00:24:33.048 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:24:33.048 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:24:33.048 lat (usec) : 100=0.05%, 250=99.69%, 500=0.21%, 750=0.02% 00:24:33.048 lat (msec) : 2=0.02%, 4=0.02% 00:24:33.048 cpu : usr=2.50%, sys=8.40%, ctx=5751, majf=0, minf=5 00:24:33.048 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:33.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.048 issued rwts: total=2679,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:33.048 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:33.048 00:24:33.048 Run status group 0 (all jobs): 00:24:33.048 READ: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=10.5MiB (11.0MB), run=1001-1001msec 00:24:33.048 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:24:33.048 00:24:33.048 Disk stats (read/write): 00:24:33.048 nvme0n1: ios=2610/2626, merge=0/0, ticks=479/352, in_queue=831, util=91.38% 00:24:33.048 12:28:03 
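fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v expands to the jobfile echoed at the top of the run. A standalone equivalent, assuming the connected namespace landed at /dev/nvme0n1:

    cat > /tmp/nvmf-write.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nvmf-write.fio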
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:33.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.048 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.048 rmmod nvme_tcp 00:24:33.306 rmmod nvme_fabrics 00:24:33.306 rmmod nvme_keyring 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 104785 ']' 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 104785 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 104785 ']' 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 104785 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104785 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.306 12:28:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.306 killing process with pid 104785 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104785' 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 104785 00:24:33.306 12:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 104785 00:24:33.563 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:33.563 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:33.563 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:33.563 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:24:33.563 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:24:33.563 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:33.564 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:33.820 12:28:04 
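nvmftestfini unwinds everything nvmftestinit built. A condensed sketch of the teardown logged above (104785 is this run's nvmf_tgt pid; in the harness it is a child of the shell, so wait is valid):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # drops both paths
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 104785 && wait 104785                            # stop the target
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only SPDK-tagged rules
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" nomaster
        ip link set "$l" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns delete nvmf_tgt_ns_spdk   # takes the tgt_if ends with it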
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:24:33.820 00:24:33.820 real 0m5.377s 00:24:33.820 user 0m15.057s 00:24:33.820 sys 0m1.810s 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.820 ************************************ 00:24:33.820 END TEST nvmf_nmic 00:24:33.820 ************************************ 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:24:33.820 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:24:33.821 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:33.821 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.821 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:33.821 ************************************ 00:24:33.821 START TEST nvmf_fio_target 00:24:33.821 ************************************ 00:24:33.821 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:24:33.821 * Looking for test storage... 
00:24:33.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:33.821 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:33.821 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:33.821 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:34.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.080 --rc genhtml_branch_coverage=1 00:24:34.080 --rc genhtml_function_coverage=1 00:24:34.080 --rc genhtml_legend=1 00:24:34.080 --rc geninfo_all_blocks=1 00:24:34.080 --rc geninfo_unexecuted_blocks=1 00:24:34.080 00:24:34.080 ' 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:34.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.080 --rc genhtml_branch_coverage=1 00:24:34.080 --rc genhtml_function_coverage=1 00:24:34.080 --rc genhtml_legend=1 00:24:34.080 --rc geninfo_all_blocks=1 00:24:34.080 --rc geninfo_unexecuted_blocks=1 00:24:34.080 00:24:34.080 ' 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:34.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.080 --rc genhtml_branch_coverage=1 00:24:34.080 --rc genhtml_function_coverage=1 00:24:34.080 --rc genhtml_legend=1 00:24:34.080 --rc geninfo_all_blocks=1 00:24:34.080 --rc geninfo_unexecuted_blocks=1 00:24:34.080 00:24:34.080 ' 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:34.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.080 --rc genhtml_branch_coverage=1 00:24:34.080 --rc genhtml_function_coverage=1 00:24:34.080 --rc genhtml_legend=1 00:24:34.080 --rc geninfo_all_blocks=1 00:24:34.080 --rc geninfo_unexecuted_blocks=1 00:24:34.080 
00:24:34.080 ' 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.080 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:34.081 12:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:34.081 Cannot find device "nvmf_init_br" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:34.081 Cannot find device "nvmf_init_br2" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:34.081 Cannot find device "nvmf_tgt_br" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:34.081 Cannot find device "nvmf_tgt_br2" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:34.081 Cannot find device "nvmf_init_br" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:34.081 Cannot find device "nvmf_init_br2" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:34.081 Cannot find device "nvmf_tgt_br" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:34.081 Cannot find device "nvmf_tgt_br2" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:34.081 Cannot find device "nvmf_br" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:34.081 Cannot find device "nvmf_init_if" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:34.081 Cannot find device "nvmf_init_if2" 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:34.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:34.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:34.081 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:34.340 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:34.340 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:34.340 12:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:34.340 12:28:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:34.340 12:28:05 
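Stepping back, nvmftestinit has now built this topology: two veth pairs for the initiator side (kept in the root namespace) and two for the target side (their if ends moved into nvmf_tgt_ns_spdk), with all four br peers enslaved to one bridge. A minimal sketch covering one pair of each (the if2/br2 pairs are added the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP on the initiator end and let the bridge forward
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT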
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:34.340 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:34.340 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:24:34.340 00:24:34.340 --- 10.0.0.3 ping statistics --- 00:24:34.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.340 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:34.340 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:34.341 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:34.341 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:24:34.341 00:24:34.341 --- 10.0.0.4 ping statistics --- 00:24:34.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.341 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:34.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:24:34.341 00:24:34.341 --- 10.0.0.1 ping statistics --- 00:24:34.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.341 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:34.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:34.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:24:34.341 00:24:34.341 --- 10.0.0.2 ping statistics --- 00:24:34.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.341 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.341 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.599 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:24:34.599 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.599 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.599 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.599 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=105112 00:24:34.600 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:24:34.600 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 105112 00:24:34.600 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 105112 ']' 00:24:34.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.600 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.600 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.600 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.600 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.600 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.600 [2024-12-05 12:28:05.279385] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:24:34.600 [2024-12-05 12:28:05.280448] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:24:34.600 [2024-12-05 12:28:05.280665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.600 [2024-12-05 12:28:05.429666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.857 [2024-12-05 12:28:05.495458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.857 [2024-12-05 12:28:05.495543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.857 [2024-12-05 12:28:05.495558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.857 [2024-12-05 12:28:05.495570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.857 [2024-12-05 12:28:05.495580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.857 [2024-12-05 12:28:05.497113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.857 [2024-12-05 12:28:05.497314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.857 [2024-12-05 12:28:05.497408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.857 [2024-12-05 12:28:05.497411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.857 [2024-12-05 12:28:05.639856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:34.857 [2024-12-05 12:28:05.640211] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:34.857 [2024-12-05 12:28:05.641145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:34.857 [2024-12-05 12:28:05.641203] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:34.857 [2024-12-05 12:28:05.641542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
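nvmf_tgt is started inside the target namespace with --interrupt-mode, so the four reactors and each nvmf_tgt poll group thread come up event-driven instead of busy-polling, as the notices above record. The launch reduced to its essentials, with a simple RPC probe standing in for the harness's waitforlisten (rpc_get_methods is a stock SPDK RPC):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done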
00:24:34.857 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.857 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:24:34.857 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.857 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.858 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.858 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.858 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:35.115 [2024-12-05 12:28:05.926984] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.115 12:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:35.679 12:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:24:35.679 12:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:35.937 12:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:24:35.937 12:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:36.195 12:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:24:36.195 12:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:36.453 12:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:24:36.453 12:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:24:36.712 12:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:36.970 12:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:24:36.970 12:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:37.246 12:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:24:37.246 12:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:37.505 12:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:24:37.505 12:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:24:37.762 12:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:38.021 12:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:24:38.021 12:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.021 12:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:24:38.021 12:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:38.589 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:38.589 [2024-12-05 12:28:09.407136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:38.589 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:24:38.849 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:24:39.107 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:24:39.107 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:24:39.107 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:24:39.107 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.107 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:24:39.107 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:24:39.107 12:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:24:41.639 12:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:41.639 12:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:41.639 12:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:24:41.639 12:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:24:41.639 12:28:11 
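Taken together, fio.sh provisions seven 64 MiB malloc bdevs, folds two into a raid0 and three into a concat array, and publishes the four resulting block devices as namespaces of a single subsystem, which is why waitforserial expects four devices. A condensed sketch of the RPC sequence:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for _ in $(seq 7); do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for b in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$b"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # nvme connect then yields /dev/nvme0n1..nvme0n4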
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:41.639 12:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:24:41.639 12:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:41.639 [global] 00:24:41.639 thread=1 00:24:41.639 invalidate=1 00:24:41.639 rw=write 00:24:41.639 time_based=1 00:24:41.639 runtime=1 00:24:41.639 ioengine=libaio 00:24:41.639 direct=1 00:24:41.639 bs=4096 00:24:41.639 iodepth=1 00:24:41.639 norandommap=0 00:24:41.639 numjobs=1 00:24:41.639 00:24:41.639 verify_dump=1 00:24:41.639 verify_backlog=512 00:24:41.639 verify_state_save=0 00:24:41.639 do_verify=1 00:24:41.639 verify=crc32c-intel 00:24:41.639 [job0] 00:24:41.639 filename=/dev/nvme0n1 00:24:41.639 [job1] 00:24:41.639 filename=/dev/nvme0n2 00:24:41.639 [job2] 00:24:41.639 filename=/dev/nvme0n3 00:24:41.639 [job3] 00:24:41.639 filename=/dev/nvme0n4 00:24:41.639 Could not set queue depth (nvme0n1) 00:24:41.639 Could not set queue depth (nvme0n2) 00:24:41.639 Could not set queue depth (nvme0n3) 00:24:41.639 Could not set queue depth (nvme0n4) 00:24:41.639 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:41.639 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:41.639 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:41.639 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:41.639 fio-3.35 00:24:41.639 Starting 4 threads 00:24:42.572 00:24:42.572 job0: (groupid=0, jobs=1): err= 0: pid=105380: Thu Dec 5 12:28:13 2024 00:24:42.572 read: IOPS=1937, BW=7748KiB/s (7934kB/s)(7756KiB/1001msec) 00:24:42.572 slat (nsec): min=13345, max=55939, avg=16445.06, stdev=4118.80 00:24:42.572 clat (usec): min=168, max=1959, avg=260.21, stdev=82.46 00:24:42.572 lat (usec): min=183, max=1977, avg=276.65, stdev=84.16 00:24:42.572 clat percentiles (usec): 00:24:42.572 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 208], 00:24:42.572 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:24:42.572 | 70.00th=[ 255], 80.00th=[ 318], 90.00th=[ 375], 95.00th=[ 420], 00:24:42.572 | 99.00th=[ 486], 99.50th=[ 519], 99.90th=[ 807], 99.95th=[ 1958], 00:24:42.572 | 99.99th=[ 1958] 00:24:42.572 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:42.572 slat (usec): min=19, max=123, avg=27.77, stdev=10.12 00:24:42.572 clat (usec): min=107, max=1119, avg=195.19, stdev=59.26 00:24:42.572 lat (usec): min=129, max=1160, avg=222.96, stdev=65.58 00:24:42.572 clat percentiles (usec): 00:24:42.572 | 1.00th=[ 122], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 147], 00:24:42.572 | 30.00th=[ 157], 40.00th=[ 167], 50.00th=[ 178], 60.00th=[ 194], 00:24:42.572 | 70.00th=[ 225], 80.00th=[ 247], 90.00th=[ 273], 95.00th=[ 293], 00:24:42.572 | 99.00th=[ 367], 99.50th=[ 388], 99.90th=[ 502], 99.95th=[ 586], 00:24:42.572 | 99.99th=[ 1123] 00:24:42.572 bw ( KiB/s): min= 8192, max= 8192, per=27.12%, avg=8192.00, stdev= 0.00, samples=1 00:24:42.572 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:42.572 lat (usec) : 250=74.67%, 500=24.91%, 750=0.35%, 1000=0.03% 00:24:42.572 lat (msec) : 
2=0.05% 00:24:42.572 cpu : usr=2.20%, sys=5.80%, ctx=3988, majf=0, minf=5 00:24:42.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:42.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.572 issued rwts: total=1939,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:42.572 job1: (groupid=0, jobs=1): err= 0: pid=105381: Thu Dec 5 12:28:13 2024 00:24:42.572 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:24:42.572 slat (nsec): min=12683, max=58505, avg=15504.32, stdev=3467.93 00:24:42.572 clat (usec): min=200, max=568, avg=247.84, stdev=24.30 00:24:42.572 lat (usec): min=218, max=600, avg=263.34, stdev=24.94 00:24:42.572 clat percentiles (usec): 00:24:42.572 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 233], 00:24:42.572 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:24:42.572 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:24:42.572 | 99.00th=[ 322], 99.50th=[ 388], 99.90th=[ 482], 99.95th=[ 529], 00:24:42.572 | 99.99th=[ 570] 00:24:42.572 write: IOPS=2061, BW=8248KiB/s (8446kB/s)(8256KiB/1001msec); 0 zone resets 00:24:42.572 slat (usec): min=17, max=110, avg=24.14, stdev= 7.57 00:24:42.572 clat (usec): min=130, max=1588, avg=195.66, stdev=49.03 00:24:42.572 lat (usec): min=149, max=1640, avg=219.81, stdev=51.08 00:24:42.572 clat percentiles (usec): 00:24:42.572 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:24:42.572 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:24:42.572 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 223], 95.00th=[ 239], 00:24:42.572 | 99.00th=[ 289], 99.50th=[ 334], 99.90th=[ 652], 99.95th=[ 1352], 00:24:42.572 | 99.99th=[ 1582] 00:24:42.572 bw ( KiB/s): min= 8192, max= 8192, per=27.12%, avg=8192.00, stdev= 0.00, samples=1 00:24:42.572 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:42.572 lat (usec) : 250=80.28%, 500=19.55%, 750=0.12% 00:24:42.572 lat (msec) : 2=0.05% 00:24:42.572 cpu : usr=1.90%, sys=5.70%, ctx=4113, majf=0, minf=5 00:24:42.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:42.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.572 issued rwts: total=2048,2064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:42.572 job2: (groupid=0, jobs=1): err= 0: pid=105382: Thu Dec 5 12:28:13 2024 00:24:42.572 read: IOPS=1389, BW=5558KiB/s (5692kB/s)(5564KiB/1001msec) 00:24:42.572 slat (usec): min=8, max=177, avg=20.62, stdev= 9.91 00:24:42.572 clat (usec): min=180, max=4275, avg=374.13, stdev=134.50 00:24:42.572 lat (usec): min=205, max=4302, avg=394.75, stdev=134.43 00:24:42.572 clat percentiles (usec): 00:24:42.572 | 1.00th=[ 212], 5.00th=[ 293], 10.00th=[ 310], 20.00th=[ 330], 00:24:42.572 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 375], 00:24:42.572 | 70.00th=[ 388], 80.00th=[ 408], 90.00th=[ 441], 95.00th=[ 482], 00:24:42.572 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 2540], 99.95th=[ 4293], 00:24:42.572 | 99.99th=[ 4293] 00:24:42.572 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:24:42.572 slat (usec): min=14, max=125, avg=28.36, stdev=11.53 
00:24:42.572 clat (usec): min=139, max=2429, avg=261.39, stdev=91.79 00:24:42.572 lat (usec): min=164, max=2469, avg=289.75, stdev=92.90 00:24:42.572 clat percentiles (usec): 00:24:42.572 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 198], 20.00th=[ 217], 00:24:42.572 | 30.00th=[ 231], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 265], 00:24:42.572 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 347], 00:24:42.572 | 99.00th=[ 404], 99.50th=[ 506], 99.90th=[ 1729], 99.95th=[ 2442], 00:24:42.572 | 99.99th=[ 2442] 00:24:42.572 bw ( KiB/s): min= 4272, max= 8016, per=20.34%, avg=6144.00, stdev=2647.41, samples=2 00:24:42.572 iops : min= 1068, max= 2004, avg=1536.00, stdev=661.85, samples=2 00:24:42.572 lat (usec) : 250=25.32%, 500=72.77%, 750=1.64%, 1000=0.07% 00:24:42.572 lat (msec) : 2=0.10%, 4=0.07%, 10=0.03% 00:24:42.572 cpu : usr=1.70%, sys=5.20%, ctx=2932, majf=0, minf=14 00:24:42.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:42.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.572 issued rwts: total=1391,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:42.572 job3: (groupid=0, jobs=1): err= 0: pid=105383: Thu Dec 5 12:28:13 2024 00:24:42.572 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:24:42.572 slat (nsec): min=8906, max=51435, avg=18292.80, stdev=4019.61 00:24:42.572 clat (usec): min=199, max=750, avg=321.30, stdev=80.19 00:24:42.572 lat (usec): min=217, max=767, avg=339.59, stdev=79.56 00:24:42.572 clat percentiles (usec): 00:24:42.572 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239], 00:24:42.572 | 30.00th=[ 253], 40.00th=[ 277], 50.00th=[ 326], 60.00th=[ 351], 00:24:42.572 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 420], 95.00th=[ 461], 00:24:42.572 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 611], 99.95th=[ 750], 00:24:42.572 | 99.99th=[ 750] 00:24:42.572 write: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec); 0 zone resets 00:24:42.572 slat (usec): min=14, max=146, avg=24.74, stdev= 8.23 00:24:42.572 clat (usec): min=96, max=1262, avg=222.43, stdev=61.65 00:24:42.572 lat (usec): min=157, max=1300, avg=247.18, stdev=60.57 00:24:42.572 clat percentiles (usec): 00:24:42.572 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 178], 00:24:42.572 | 30.00th=[ 188], 40.00th=[ 198], 50.00th=[ 208], 60.00th=[ 219], 00:24:42.572 | 70.00th=[ 241], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 326], 00:24:42.572 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 1074], 99.95th=[ 1270], 00:24:42.572 | 99.99th=[ 1270] 00:24:42.572 bw ( KiB/s): min= 7096, max= 8208, per=25.33%, avg=7652.00, stdev=786.30, samples=2 00:24:42.572 iops : min= 1774, max= 2052, avg=1913.00, stdev=196.58, samples=2 00:24:42.572 lat (usec) : 100=0.03%, 250=53.83%, 500=45.01%, 750=1.04%, 1000=0.03% 00:24:42.572 lat (msec) : 2=0.06% 00:24:42.572 cpu : usr=0.60%, sys=6.50%, ctx=3455, majf=0, minf=15 00:24:42.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:42.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.572 issued rwts: total=1536,1912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:42.572 00:24:42.572 Run status group 0 (all jobs): 
00:24:42.572 READ: bw=27.0MiB/s (28.3MB/s), 5558KiB/s-8184KiB/s (5692kB/s-8380kB/s), io=27.0MiB (28.3MB), run=1001-1001msec 00:24:42.572 WRITE: bw=29.5MiB/s (30.9MB/s), 6138KiB/s-8248KiB/s (6285kB/s-8446kB/s), io=29.5MiB (31.0MB), run=1001-1001msec 00:24:42.572 00:24:42.572 Disk stats (read/write): 00:24:42.572 nvme0n1: ios=1586/1810, merge=0/0, ticks=457/379, in_queue=836, util=88.98% 00:24:42.572 nvme0n2: ios=1585/2030, merge=0/0, ticks=408/420, in_queue=828, util=87.99% 00:24:42.572 nvme0n3: ios=1050/1536, merge=0/0, ticks=395/401, in_queue=796, util=88.30% 00:24:42.573 nvme0n4: ios=1426/1536, merge=0/0, ticks=454/336, in_queue=790, util=89.58% 00:24:42.573 12:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:24:42.573 [global] 00:24:42.573 thread=1 00:24:42.573 invalidate=1 00:24:42.573 rw=randwrite 00:24:42.573 time_based=1 00:24:42.573 runtime=1 00:24:42.573 ioengine=libaio 00:24:42.573 direct=1 00:24:42.573 bs=4096 00:24:42.573 iodepth=1 00:24:42.573 norandommap=0 00:24:42.573 numjobs=1 00:24:42.573 00:24:42.573 verify_dump=1 00:24:42.573 verify_backlog=512 00:24:42.573 verify_state_save=0 00:24:42.573 do_verify=1 00:24:42.573 verify=crc32c-intel 00:24:42.573 [job0] 00:24:42.573 filename=/dev/nvme0n1 00:24:42.573 [job1] 00:24:42.573 filename=/dev/nvme0n2 00:24:42.573 [job2] 00:24:42.573 filename=/dev/nvme0n3 00:24:42.573 [job3] 00:24:42.573 filename=/dev/nvme0n4 00:24:42.830 Could not set queue depth (nvme0n1) 00:24:42.830 Could not set queue depth (nvme0n2) 00:24:42.830 Could not set queue depth (nvme0n3) 00:24:42.831 Could not set queue depth (nvme0n4) 00:24:42.831 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:42.831 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:42.831 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:42.831 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:42.831 fio-3.35 00:24:42.831 Starting 4 threads 00:24:44.204 00:24:44.204 job0: (groupid=0, jobs=1): err= 0: pid=105442: Thu Dec 5 12:28:14 2024 00:24:44.204 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:24:44.204 slat (nsec): min=10492, max=70012, avg=16576.18, stdev=5422.40 00:24:44.204 clat (usec): min=157, max=1461, avg=326.21, stdev=78.57 00:24:44.204 lat (usec): min=168, max=1476, avg=342.78, stdev=80.03 00:24:44.204 clat percentiles (usec): 00:24:44.204 | 1.00th=[ 200], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 247], 00:24:44.204 | 30.00th=[ 265], 40.00th=[ 318], 50.00th=[ 338], 60.00th=[ 355], 00:24:44.204 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 445], 00:24:44.204 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 627], 99.95th=[ 1467], 00:24:44.204 | 99.99th=[ 1467] 00:24:44.204 write: IOPS=1717, BW=6869KiB/s (7034kB/s)(6876KiB/1001msec); 0 zone resets 00:24:44.204 slat (usec): min=9, max=109, avg=26.28, stdev= 9.65 00:24:44.204 clat (usec): min=146, max=727, avg=245.29, stdev=46.56 00:24:44.204 lat (usec): min=165, max=764, avg=271.57, stdev=51.40 00:24:44.204 clat percentiles (usec): 00:24:44.204 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 198], 00:24:44.204 | 30.00th=[ 223], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:24:44.204 | 70.00th=[ 265], 80.00th=[ 277], 
90.00th=[ 297], 95.00th=[ 322], 00:24:44.204 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 465], 99.95th=[ 725], 00:24:44.204 | 99.99th=[ 725] 00:24:44.204 bw ( KiB/s): min= 7976, max= 7976, per=26.50%, avg=7976.00, stdev= 0.00, samples=1 00:24:44.204 iops : min= 1994, max= 1994, avg=1994.00, stdev= 0.00, samples=1 00:24:44.204 lat (usec) : 250=39.42%, 500=59.45%, 750=1.11% 00:24:44.204 lat (msec) : 2=0.03% 00:24:44.204 cpu : usr=1.10%, sys=5.90%, ctx=3255, majf=0, minf=9 00:24:44.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:44.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.204 issued rwts: total=1536,1719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:44.204 job1: (groupid=0, jobs=1): err= 0: pid=105443: Thu Dec 5 12:28:14 2024 00:24:44.204 read: IOPS=1866, BW=7465KiB/s (7644kB/s)(7472KiB/1001msec) 00:24:44.204 slat (nsec): min=9640, max=52936, avg=14969.08, stdev=4717.62 00:24:44.204 clat (usec): min=170, max=3402, avg=276.38, stdev=109.53 00:24:44.204 lat (usec): min=181, max=3429, avg=291.35, stdev=110.42 00:24:44.204 clat percentiles (usec): 00:24:44.204 | 1.00th=[ 204], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 235], 00:24:44.204 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 265], 00:24:44.204 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 396], 00:24:44.204 | 99.00th=[ 529], 99.50th=[ 725], 99.90th=[ 2147], 99.95th=[ 3392], 00:24:44.204 | 99.99th=[ 3392] 00:24:44.204 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:44.204 slat (usec): min=14, max=165, avg=21.39, stdev= 6.86 00:24:44.204 clat (usec): min=119, max=497, avg=198.34, stdev=27.85 00:24:44.204 lat (usec): min=158, max=523, avg=219.72, stdev=28.67 00:24:44.204 clat percentiles (usec): 00:24:44.204 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:24:44.204 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 200], 00:24:44.204 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 243], 00:24:44.204 | 99.00th=[ 281], 99.50th=[ 334], 99.90th=[ 449], 99.95th=[ 465], 00:24:44.204 | 99.99th=[ 498] 00:24:44.204 bw ( KiB/s): min= 8344, max= 8344, per=27.73%, avg=8344.00, stdev= 0.00, samples=1 00:24:44.204 iops : min= 2086, max= 2086, avg=2086.00, stdev= 0.00, samples=1 00:24:44.204 lat (usec) : 250=71.30%, 500=28.17%, 750=0.33%, 1000=0.10% 00:24:44.204 lat (msec) : 2=0.05%, 4=0.05% 00:24:44.204 cpu : usr=0.70%, sys=6.00%, ctx=3917, majf=0, minf=5 00:24:44.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:44.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.204 issued rwts: total=1868,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:44.204 job2: (groupid=0, jobs=1): err= 0: pid=105444: Thu Dec 5 12:28:14 2024 00:24:44.204 read: IOPS=1874, BW=7497KiB/s (7676kB/s)(7504KiB/1001msec) 00:24:44.204 slat (usec): min=9, max=235, avg=15.41, stdev= 6.89 00:24:44.204 clat (usec): min=66, max=2000, avg=274.63, stdev=65.44 00:24:44.204 lat (usec): min=200, max=2014, avg=290.04, stdev=64.99 00:24:44.204 clat percentiles (usec): 00:24:44.204 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:24:44.204 | 
30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 269], 00:24:44.204 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 330], 95.00th=[ 375], 00:24:44.204 | 99.00th=[ 453], 99.50th=[ 537], 99.90th=[ 1020], 99.95th=[ 2008], 00:24:44.204 | 99.99th=[ 2008] 00:24:44.204 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:44.204 slat (nsec): min=13494, max=86849, avg=22383.61, stdev=6523.80 00:24:44.204 clat (usec): min=145, max=372, avg=197.55, stdev=27.09 00:24:44.204 lat (usec): min=166, max=394, avg=219.93, stdev=27.50 00:24:44.204 clat percentiles (usec): 00:24:44.204 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:24:44.204 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 202], 00:24:44.204 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 233], 95.00th=[ 249], 00:24:44.204 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 343], 99.95th=[ 367], 00:24:44.204 | 99.99th=[ 371] 00:24:44.204 bw ( KiB/s): min= 8192, max= 8192, per=27.22%, avg=8192.00, stdev= 0.00, samples=1 00:24:44.204 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:44.204 lat (usec) : 100=0.03%, 250=67.84%, 500=31.86%, 750=0.23% 00:24:44.204 lat (msec) : 2=0.03%, 4=0.03% 00:24:44.204 cpu : usr=1.70%, sys=5.10%, ctx=3928, majf=0, minf=13 00:24:44.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:44.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.204 issued rwts: total=1876,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:44.204 job3: (groupid=0, jobs=1): err= 0: pid=105445: Thu Dec 5 12:28:14 2024 00:24:44.204 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:24:44.204 slat (nsec): min=7950, max=83615, avg=15302.22, stdev=6651.19 00:24:44.204 clat (usec): min=168, max=1346, avg=327.89, stdev=72.37 00:24:44.204 lat (usec): min=179, max=1367, avg=343.19, stdev=75.66 00:24:44.204 clat percentiles (usec): 00:24:44.204 | 1.00th=[ 215], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 253], 00:24:44.204 | 30.00th=[ 273], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 355], 00:24:44.204 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 396], 95.00th=[ 433], 00:24:44.204 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 594], 99.95th=[ 1352], 00:24:44.205 | 99.99th=[ 1352] 00:24:44.205 write: IOPS=1714, BW=6857KiB/s (7022kB/s)(6864KiB/1001msec); 0 zone resets 00:24:44.205 slat (nsec): min=13020, max=94752, avg=27065.18, stdev=10441.93 00:24:44.205 clat (usec): min=114, max=921, avg=244.88, stdev=48.13 00:24:44.205 lat (usec): min=132, max=959, avg=271.95, stdev=53.21 00:24:44.205 clat percentiles (usec): 00:24:44.205 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 200], 00:24:44.205 | 30.00th=[ 219], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:24:44.205 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 322], 00:24:44.205 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 441], 99.95th=[ 922], 00:24:44.205 | 99.99th=[ 922] 00:24:44.205 bw ( KiB/s): min= 7960, max= 7960, per=26.45%, avg=7960.00, stdev= 0.00, samples=1 00:24:44.205 iops : min= 1990, max= 1990, avg=1990.00, stdev= 0.00, samples=1 00:24:44.205 lat (usec) : 250=37.21%, 500=61.96%, 750=0.77%, 1000=0.03% 00:24:44.205 lat (msec) : 2=0.03% 00:24:44.205 cpu : usr=1.80%, sys=5.10%, ctx=3252, majf=0, minf=17 00:24:44.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:24:44.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.205 issued rwts: total=1536,1716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:44.205 00:24:44.205 Run status group 0 (all jobs): 00:24:44.205 READ: bw=26.6MiB/s (27.9MB/s), 6138KiB/s-7497KiB/s (6285kB/s-7676kB/s), io=26.6MiB (27.9MB), run=1001-1001msec 00:24:44.205 WRITE: bw=29.4MiB/s (30.8MB/s), 6857KiB/s-8184KiB/s (7022kB/s-8380kB/s), io=29.4MiB (30.8MB), run=1001-1001msec 00:24:44.205 00:24:44.205 Disk stats (read/write): 00:24:44.205 nvme0n1: ios=1276/1536, merge=0/0, ticks=478/402, in_queue=880, util=90.18% 00:24:44.205 nvme0n2: ios=1585/1983, merge=0/0, ticks=427/412, in_queue=839, util=89.01% 00:24:44.205 nvme0n3: ios=1571/1994, merge=0/0, ticks=445/411, in_queue=856, util=90.48% 00:24:44.205 nvme0n4: ios=1241/1536, merge=0/0, ticks=437/396, in_queue=833, util=90.23% 00:24:44.205 12:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:24:44.205 [global] 00:24:44.205 thread=1 00:24:44.205 invalidate=1 00:24:44.205 rw=write 00:24:44.205 time_based=1 00:24:44.205 runtime=1 00:24:44.205 ioengine=libaio 00:24:44.205 direct=1 00:24:44.205 bs=4096 00:24:44.205 iodepth=128 00:24:44.205 norandommap=0 00:24:44.205 numjobs=1 00:24:44.205 00:24:44.205 verify_dump=1 00:24:44.205 verify_backlog=512 00:24:44.205 verify_state_save=0 00:24:44.205 do_verify=1 00:24:44.205 verify=crc32c-intel 00:24:44.205 [job0] 00:24:44.205 filename=/dev/nvme0n1 00:24:44.205 [job1] 00:24:44.205 filename=/dev/nvme0n2 00:24:44.205 [job2] 00:24:44.205 filename=/dev/nvme0n3 00:24:44.205 [job3] 00:24:44.205 filename=/dev/nvme0n4 00:24:44.205 Could not set queue depth (nvme0n1) 00:24:44.205 Could not set queue depth (nvme0n2) 00:24:44.205 Could not set queue depth (nvme0n3) 00:24:44.205 Could not set queue depth (nvme0n4) 00:24:44.205 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:44.205 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:44.205 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:44.205 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:44.205 fio-3.35 00:24:44.205 Starting 4 threads 00:24:45.578 00:24:45.578 job0: (groupid=0, jobs=1): err= 0: pid=105498: Thu Dec 5 12:28:16 2024 00:24:45.578 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:24:45.578 slat (usec): min=4, max=6982, avg=112.66, stdev=603.86 00:24:45.578 clat (usec): min=10410, max=26210, avg=15596.73, stdev=2090.82 00:24:45.578 lat (usec): min=10943, max=26224, avg=15709.39, stdev=2099.90 00:24:45.578 clat percentiles (usec): 00:24:45.578 | 1.00th=[11469], 5.00th=[12387], 10.00th=[12911], 20.00th=[13829], 00:24:45.578 | 30.00th=[14222], 40.00th=[15008], 50.00th=[15533], 60.00th=[16319], 00:24:45.578 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[19268], 00:24:45.578 | 99.00th=[20317], 99.50th=[21103], 99.90th=[22938], 99.95th=[22938], 00:24:45.578 | 99.99th=[26084] 00:24:45.578 write: IOPS=4149, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1001msec); 0 zone resets 00:24:45.578 slat (usec): min=8, max=9030, 
avg=121.16, stdev=740.39 00:24:45.578 clat (usec): min=495, max=24432, avg=15045.29, stdev=1753.14 00:24:45.578 lat (usec): min=6353, max=24472, avg=15166.45, stdev=1873.49 00:24:45.578 clat percentiles (usec): 00:24:45.578 | 1.00th=[ 7504], 5.00th=[13042], 10.00th=[13435], 20.00th=[14091], 00:24:45.578 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:24:45.578 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16712], 95.00th=[17695], 00:24:45.578 | 99.00th=[20317], 99.50th=[21365], 99.90th=[22938], 99.95th=[23462], 00:24:45.578 | 99.99th=[24511] 00:24:45.578 bw ( KiB/s): min=16384, max=16416, per=33.59%, avg=16400.00, stdev=22.63, samples=2 00:24:45.578 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:24:45.578 lat (usec) : 500=0.01% 00:24:45.578 lat (msec) : 10=0.73%, 20=97.61%, 50=1.65% 00:24:45.578 cpu : usr=4.00%, sys=12.10%, ctx=262, majf=0, minf=10 00:24:45.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:24:45.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.578 issued rwts: total=4096,4154,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.578 job1: (groupid=0, jobs=1): err= 0: pid=105499: Thu Dec 5 12:28:16 2024 00:24:45.578 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:24:45.578 slat (usec): min=6, max=10251, avg=181.21, stdev=896.51 00:24:45.578 clat (usec): min=12995, max=37600, avg=21551.40, stdev=4509.20 00:24:45.578 lat (usec): min=13019, max=43697, avg=21732.61, stdev=4601.68 00:24:45.578 clat percentiles (usec): 00:24:45.578 | 1.00th=[14484], 5.00th=[16909], 10.00th=[17171], 20.00th=[18220], 00:24:45.578 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19792], 60.00th=[20579], 00:24:45.578 | 70.00th=[23725], 80.00th=[25822], 90.00th=[28181], 95.00th=[30278], 00:24:45.578 | 99.00th=[33817], 99.50th=[35390], 99.90th=[36439], 99.95th=[36963], 00:24:45.578 | 99.99th=[37487] 00:24:45.578 write: IOPS=2280, BW=9121KiB/s (9340kB/s)(9212KiB/1010msec); 0 zone resets 00:24:45.578 slat (usec): min=8, max=16859, avg=264.75, stdev=969.78 00:24:45.578 clat (usec): min=7108, max=64227, avg=36263.97, stdev=11973.32 00:24:45.578 lat (usec): min=9794, max=64271, avg=36528.72, stdev=12048.46 00:24:45.578 clat percentiles (usec): 00:24:45.578 | 1.00th=[12518], 5.00th=[19268], 10.00th=[23462], 20.00th=[26346], 00:24:45.578 | 30.00th=[30540], 40.00th=[32375], 50.00th=[33162], 60.00th=[35390], 00:24:45.578 | 70.00th=[39060], 80.00th=[47973], 90.00th=[55837], 95.00th=[60031], 00:24:45.578 | 99.00th=[63177], 99.50th=[63701], 99.90th=[64226], 99.95th=[64226], 00:24:45.578 | 99.99th=[64226] 00:24:45.578 bw ( KiB/s): min= 8696, max= 8704, per=17.82%, avg=8700.00, stdev= 5.66, samples=2 00:24:45.578 iops : min= 2174, max= 2176, avg=2175.00, stdev= 1.41, samples=2 00:24:45.578 lat (msec) : 10=0.21%, 20=29.23%, 50=61.50%, 100=9.06% 00:24:45.578 cpu : usr=2.68%, sys=7.23%, ctx=298, majf=0, minf=13 00:24:45.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:45.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.578 issued rwts: total=2048,2303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.578 job2: (groupid=0, jobs=1): err= 
0: pid=105500: Thu Dec 5 12:28:16 2024 00:24:45.578 read: IOPS=1645, BW=6583KiB/s (6741kB/s)(6616KiB/1005msec) 00:24:45.578 slat (usec): min=2, max=11396, avg=209.74, stdev=978.60 00:24:45.578 clat (usec): min=2321, max=46374, avg=24491.89, stdev=5627.11 00:24:45.578 lat (usec): min=5567, max=46394, avg=24701.63, stdev=5697.08 00:24:45.578 clat percentiles (usec): 00:24:45.578 | 1.00th=[ 5800], 5.00th=[19530], 10.00th=[21103], 20.00th=[22414], 00:24:45.578 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[24511], 00:24:45.578 | 70.00th=[25035], 80.00th=[26084], 90.00th=[29754], 95.00th=[34341], 00:24:45.578 | 99.00th=[43254], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:24:45.578 | 99.99th=[46400] 00:24:45.578 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:24:45.578 slat (usec): min=10, max=13098, avg=310.58, stdev=1183.59 00:24:45.578 clat (usec): min=21632, max=76362, avg=41627.31, stdev=12501.60 00:24:45.578 lat (usec): min=21654, max=76429, avg=41937.89, stdev=12576.25 00:24:45.578 clat percentiles (usec): 00:24:45.578 | 1.00th=[25297], 5.00th=[29230], 10.00th=[31065], 20.00th=[32375], 00:24:45.578 | 30.00th=[33817], 40.00th=[34866], 50.00th=[36963], 60.00th=[38536], 00:24:45.578 | 70.00th=[43254], 80.00th=[52691], 90.00th=[64226], 95.00th=[68682], 00:24:45.578 | 99.00th=[74974], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:24:45.578 | 99.99th=[76022] 00:24:45.578 bw ( KiB/s): min= 8112, max= 8208, per=16.71%, avg=8160.00, stdev=67.88, samples=2 00:24:45.578 iops : min= 2028, max= 2052, avg=2040.00, stdev=16.97, samples=2 00:24:45.578 lat (msec) : 4=0.03%, 10=1.13%, 20=1.13%, 50=85.31%, 100=12.40% 00:24:45.578 cpu : usr=2.39%, sys=6.57%, ctx=303, majf=0, minf=15 00:24:45.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:24:45.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.578 issued rwts: total=1654,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.578 job3: (groupid=0, jobs=1): err= 0: pid=105501: Thu Dec 5 12:28:16 2024 00:24:45.578 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:24:45.578 slat (usec): min=5, max=5079, avg=135.64, stdev=592.70 00:24:45.578 clat (usec): min=12653, max=22735, avg=17636.43, stdev=1423.74 00:24:45.578 lat (usec): min=13364, max=22746, avg=17772.07, stdev=1364.93 00:24:45.578 clat percentiles (usec): 00:24:45.578 | 1.00th=[14091], 5.00th=[14877], 10.00th=[15664], 20.00th=[16712], 00:24:45.578 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:24:45.578 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19268], 95.00th=[20055], 00:24:45.578 | 99.00th=[21365], 99.50th=[21627], 99.90th=[22676], 99.95th=[22676], 00:24:45.578 | 99.99th=[22676] 00:24:45.578 write: IOPS=3819, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1001msec); 0 zone resets 00:24:45.578 slat (usec): min=7, max=5172, avg=128.45, stdev=614.71 00:24:45.578 clat (usec): min=599, max=20158, avg=16476.25, stdev=2042.21 00:24:45.578 lat (usec): min=4720, max=20256, avg=16604.69, stdev=1977.39 00:24:45.578 clat percentiles (usec): 00:24:45.578 | 1.00th=[ 8586], 5.00th=[13435], 10.00th=[14222], 20.00th=[15139], 00:24:45.578 | 30.00th=[15795], 40.00th=[16450], 50.00th=[16909], 60.00th=[17433], 00:24:45.578 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:24:45.578 | 
99.00th=[19268], 99.50th=[19268], 99.90th=[20055], 99.95th=[20055], 00:24:45.578 | 99.99th=[20055] 00:24:45.578 bw ( KiB/s): min=16272, max=16272, per=33.33%, avg=16272.00, stdev= 0.00, samples=1 00:24:45.578 iops : min= 4068, max= 4068, avg=4068.00, stdev= 0.00, samples=1 00:24:45.578 lat (usec) : 750=0.01% 00:24:45.578 lat (msec) : 10=0.86%, 20=96.83%, 50=2.30% 00:24:45.578 cpu : usr=2.90%, sys=7.50%, ctx=347, majf=0, minf=7 00:24:45.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:24:45.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.578 issued rwts: total=3584,3823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.578 00:24:45.578 Run status group 0 (all jobs): 00:24:45.578 READ: bw=44.0MiB/s (46.2MB/s), 6583KiB/s-16.0MiB/s (6741kB/s-16.8MB/s), io=44.5MiB (46.6MB), run=1001-1010msec 00:24:45.578 WRITE: bw=47.7MiB/s (50.0MB/s), 8151KiB/s-16.2MiB/s (8347kB/s-17.0MB/s), io=48.2MiB (50.5MB), run=1001-1010msec 00:24:45.578 00:24:45.578 Disk stats (read/write): 00:24:45.578 nvme0n1: ios=3503/3584, merge=0/0, ticks=25228/23543, in_queue=48771, util=87.98% 00:24:45.578 nvme0n2: ios=1568/1991, merge=0/0, ticks=16608/35326, in_queue=51934, util=87.84% 00:24:45.578 nvme0n3: ios=1546/1703, merge=0/0, ticks=12913/21856, in_queue=34769, util=91.92% 00:24:45.578 nvme0n4: ios=3072/3252, merge=0/0, ticks=13087/12232, in_queue=25319, util=89.68% 00:24:45.578 12:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:24:45.578 [global] 00:24:45.578 thread=1 00:24:45.578 invalidate=1 00:24:45.578 rw=randwrite 00:24:45.578 time_based=1 00:24:45.578 runtime=1 00:24:45.578 ioengine=libaio 00:24:45.578 direct=1 00:24:45.578 bs=4096 00:24:45.578 iodepth=128 00:24:45.578 norandommap=0 00:24:45.578 numjobs=1 00:24:45.578 00:24:45.578 verify_dump=1 00:24:45.578 verify_backlog=512 00:24:45.578 verify_state_save=0 00:24:45.578 do_verify=1 00:24:45.578 verify=crc32c-intel 00:24:45.578 [job0] 00:24:45.578 filename=/dev/nvme0n1 00:24:45.578 [job1] 00:24:45.578 filename=/dev/nvme0n2 00:24:45.578 [job2] 00:24:45.578 filename=/dev/nvme0n3 00:24:45.578 [job3] 00:24:45.578 filename=/dev/nvme0n4 00:24:45.578 Could not set queue depth (nvme0n1) 00:24:45.578 Could not set queue depth (nvme0n2) 00:24:45.578 Could not set queue depth (nvme0n3) 00:24:45.578 Could not set queue depth (nvme0n4) 00:24:45.578 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:45.578 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:45.578 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:45.578 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:45.578 fio-3.35 00:24:45.578 Starting 4 threads 00:24:46.954 00:24:46.954 job0: (groupid=0, jobs=1): err= 0: pid=105556: Thu Dec 5 12:28:17 2024 00:24:46.954 read: IOPS=2005, BW=8024KiB/s (8216kB/s)(8192KiB/1021msec) 00:24:46.954 slat (usec): min=6, max=28528, avg=169.61, stdev=1201.78 00:24:46.954 clat (usec): min=5651, max=54068, avg=20397.31, stdev=8727.35 00:24:46.954 lat (usec): min=5664, max=54098, 
avg=20566.91, stdev=8814.84 00:24:46.954 clat percentiles (usec): 00:24:46.954 | 1.00th=[ 6259], 5.00th=[10945], 10.00th=[12649], 20.00th=[13435], 00:24:46.954 | 30.00th=[14222], 40.00th=[14484], 50.00th=[15926], 60.00th=[20841], 00:24:46.954 | 70.00th=[25035], 80.00th=[28705], 90.00th=[33817], 95.00th=[37487], 00:24:46.954 | 99.00th=[42730], 99.50th=[42730], 99.90th=[49546], 99.95th=[51119], 00:24:46.954 | 99.99th=[54264] 00:24:46.954 write: IOPS=2207, BW=8831KiB/s (9042kB/s)(9016KiB/1021msec); 0 zone resets 00:24:46.954 slat (usec): min=6, max=20522, avg=283.70, stdev=1485.92 00:24:46.954 clat (msec): min=3, max=131, avg=38.84, stdev=24.24 00:24:46.954 lat (msec): min=3, max=131, avg=39.12, stdev=24.35 00:24:46.954 clat percentiles (msec): 00:24:46.954 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 27], 00:24:46.954 | 30.00th=[ 28], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 29], 00:24:46.954 | 70.00th=[ 35], 80.00th=[ 60], 90.00th=[ 77], 95.00th=[ 88], 00:24:46.954 | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:24:46.954 | 99.99th=[ 132] 00:24:46.954 bw ( KiB/s): min= 8192, max= 8816, per=21.85%, avg=8504.00, stdev=441.23, samples=2 00:24:46.954 iops : min= 2048, max= 2204, avg=2126.00, stdev=110.31, samples=2 00:24:46.954 lat (msec) : 4=0.12%, 10=3.53%, 20=26.87%, 50=57.42%, 100=10.23% 00:24:46.954 lat (msec) : 250=1.84% 00:24:46.954 cpu : usr=2.35%, sys=5.39%, ctx=283, majf=0, minf=7 00:24:46.954 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:24:46.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.954 issued rwts: total=2048,2254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.954 job1: (groupid=0, jobs=1): err= 0: pid=105557: Thu Dec 5 12:28:17 2024 00:24:46.954 read: IOPS=2499, BW=9996KiB/s (10.2MB/s)(9.82MiB/1006msec) 00:24:46.954 slat (usec): min=4, max=17619, avg=188.61, stdev=1036.47 00:24:46.954 clat (usec): min=3692, max=45420, avg=23397.97, stdev=5154.10 00:24:46.954 lat (usec): min=5293, max=45470, avg=23586.58, stdev=5203.55 00:24:46.954 clat percentiles (usec): 00:24:46.954 | 1.00th=[ 5932], 5.00th=[12256], 10.00th=[18220], 20.00th=[20055], 00:24:46.954 | 30.00th=[21103], 40.00th=[22152], 50.00th=[23200], 60.00th=[24773], 00:24:46.954 | 70.00th=[26346], 80.00th=[27657], 90.00th=[29754], 95.00th=[31589], 00:24:46.954 | 99.00th=[33817], 99.50th=[33817], 99.90th=[36963], 99.95th=[39060], 00:24:46.954 | 99.99th=[45351] 00:24:46.954 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:24:46.954 slat (usec): min=5, max=30378, avg=197.45, stdev=1166.22 00:24:46.954 clat (usec): min=6286, max=54420, avg=26755.75, stdev=6200.39 00:24:46.955 lat (usec): min=6317, max=54477, avg=26953.20, stdev=6277.97 00:24:46.955 clat percentiles (usec): 00:24:46.955 | 1.00th=[10290], 5.00th=[17433], 10.00th=[20055], 20.00th=[23725], 00:24:46.955 | 30.00th=[24511], 40.00th=[25560], 50.00th=[26084], 60.00th=[26608], 00:24:46.955 | 70.00th=[27132], 80.00th=[30278], 90.00th=[34341], 95.00th=[40633], 00:24:46.955 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:24:46.955 | 99.99th=[54264] 00:24:46.955 bw ( KiB/s): min= 9576, max=10925, per=26.34%, avg=10250.50, stdev=953.89, samples=2 00:24:46.955 iops : min= 2394, max= 2731, avg=2562.50, stdev=238.29, samples=2 00:24:46.955 lat (msec) : 4=0.02%, 10=1.40%, 
20=13.44%, 50=85.12%, 100=0.02% 00:24:46.955 cpu : usr=3.08%, sys=7.46%, ctx=661, majf=0, minf=9 00:24:46.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:46.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.955 issued rwts: total=2514,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.955 job2: (groupid=0, jobs=1): err= 0: pid=105558: Thu Dec 5 12:28:17 2024 00:24:46.955 read: IOPS=2488, BW=9954KiB/s (10.2MB/s)(9.79MiB/1007msec) 00:24:46.955 slat (usec): min=3, max=27223, avg=198.33, stdev=1196.66 00:24:46.955 clat (usec): min=5742, max=55274, avg=25174.20, stdev=4742.52 00:24:46.955 lat (usec): min=13649, max=55288, avg=25372.53, stdev=4823.93 00:24:46.955 clat percentiles (usec): 00:24:46.955 | 1.00th=[14615], 5.00th=[17957], 10.00th=[20317], 20.00th=[21365], 00:24:46.955 | 30.00th=[22414], 40.00th=[23725], 50.00th=[24511], 60.00th=[25822], 00:24:46.955 | 70.00th=[27657], 80.00th=[29230], 90.00th=[30802], 95.00th=[32637], 00:24:46.955 | 99.00th=[35914], 99.50th=[38536], 99.90th=[55313], 99.95th=[55313], 00:24:46.955 | 99.99th=[55313] 00:24:46.955 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:24:46.955 slat (usec): min=5, max=18906, avg=188.64, stdev=994.73 00:24:46.955 clat (usec): min=3436, max=44106, avg=25172.26, stdev=5484.74 00:24:46.955 lat (usec): min=3460, max=44139, avg=25360.90, stdev=5588.53 00:24:46.955 clat percentiles (usec): 00:24:46.955 | 1.00th=[11076], 5.00th=[12125], 10.00th=[19530], 20.00th=[22152], 00:24:46.955 | 30.00th=[23462], 40.00th=[24249], 50.00th=[25822], 60.00th=[26346], 00:24:46.955 | 70.00th=[26870], 80.00th=[27657], 90.00th=[31065], 95.00th=[35914], 00:24:46.955 | 99.00th=[39060], 99.50th=[40109], 99.90th=[41681], 99.95th=[42206], 00:24:46.955 | 99.99th=[44303] 00:24:46.955 bw ( KiB/s): min= 9216, max=11264, per=26.31%, avg=10240.00, stdev=1448.15, samples=2 00:24:46.955 iops : min= 2304, max= 2816, avg=2560.00, stdev=362.04, samples=2 00:24:46.955 lat (msec) : 4=0.20%, 10=0.02%, 20=8.96%, 50=90.70%, 100=0.12% 00:24:46.955 cpu : usr=2.68%, sys=7.06%, ctx=669, majf=0, minf=6 00:24:46.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:46.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.955 issued rwts: total=2506,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.955 job3: (groupid=0, jobs=1): err= 0: pid=105559: Thu Dec 5 12:28:17 2024 00:24:46.955 read: IOPS=2404, BW=9619KiB/s (9850kB/s)(9696KiB/1008msec) 00:24:46.955 slat (usec): min=3, max=19950, avg=227.28, stdev=1364.68 00:24:46.955 clat (usec): min=1542, max=85906, avg=22816.10, stdev=14122.81 00:24:46.955 lat (usec): min=6200, max=85920, avg=23043.38, stdev=14297.59 00:24:46.955 clat percentiles (usec): 00:24:46.955 | 1.00th=[ 7373], 5.00th=[12911], 10.00th=[13960], 20.00th=[14615], 00:24:46.955 | 30.00th=[15270], 40.00th=[16188], 50.00th=[16909], 60.00th=[19792], 00:24:46.955 | 70.00th=[22676], 80.00th=[29492], 90.00th=[35914], 95.00th=[56361], 00:24:46.955 | 99.00th=[82314], 99.50th=[84411], 99.90th=[85459], 99.95th=[85459], 00:24:46.955 | 99.99th=[85459] 00:24:46.955 write: IOPS=2539, BW=9.92MiB/s 
(10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:24:46.955 slat (usec): min=5, max=23978, avg=170.83, stdev=1057.98 00:24:46.955 clat (usec): min=4700, max=85867, avg=28236.53, stdev=15660.99 00:24:46.955 lat (usec): min=4723, max=85876, avg=28407.36, stdev=15719.06 00:24:46.955 clat percentiles (usec): 00:24:46.955 | 1.00th=[ 6521], 5.00th=[11731], 10.00th=[13304], 20.00th=[14877], 00:24:46.955 | 30.00th=[19792], 40.00th=[25560], 50.00th=[27132], 60.00th=[28181], 00:24:46.955 | 70.00th=[28967], 80.00th=[29230], 90.00th=[49546], 95.00th=[70779], 00:24:46.955 | 99.00th=[73925], 99.50th=[73925], 99.90th=[84411], 99.95th=[85459], 00:24:46.955 | 99.99th=[85459] 00:24:46.955 bw ( KiB/s): min= 8720, max=11783, per=26.34%, avg=10251.50, stdev=2165.87, samples=2 00:24:46.955 iops : min= 2180, max= 2945, avg=2562.50, stdev=540.94, samples=2 00:24:46.955 lat (msec) : 2=0.02%, 10=2.11%, 20=42.50%, 50=47.53%, 100=7.85% 00:24:46.955 cpu : usr=2.28%, sys=5.66%, ctx=276, majf=0, minf=5 00:24:46.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:46.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.955 issued rwts: total=2424,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.955 00:24:46.955 Run status group 0 (all jobs): 00:24:46.955 READ: bw=36.3MiB/s (38.1MB/s), 8024KiB/s-9996KiB/s (8216kB/s-10.2MB/s), io=37.1MiB (38.9MB), run=1006-1021msec 00:24:46.955 WRITE: bw=38.0MiB/s (39.9MB/s), 8831KiB/s-9.94MiB/s (9042kB/s-10.4MB/s), io=38.8MiB (40.7MB), run=1006-1021msec 00:24:46.955 00:24:46.955 Disk stats (read/write): 00:24:46.955 nvme0n1: ios=1586/1903, merge=0/0, ticks=31867/72842, in_queue=104709, util=88.28% 00:24:46.955 nvme0n2: ios=2097/2269, merge=0/0, ticks=29064/36992, in_queue=66056, util=87.87% 00:24:46.955 nvme0n3: ios=2048/2306, merge=0/0, ticks=31701/34981, in_queue=66682, util=88.43% 00:24:46.955 nvme0n4: ios=2048/2303, merge=0/0, ticks=44157/59473, in_queue=103630, util=89.58% 00:24:46.955 12:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:24:46.955 12:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=105578 00:24:46.955 12:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:24:46.955 12:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:24:46.955 [global] 00:24:46.955 thread=1 00:24:46.955 invalidate=1 00:24:46.955 rw=read 00:24:46.955 time_based=1 00:24:46.955 runtime=10 00:24:46.955 ioengine=libaio 00:24:46.955 direct=1 00:24:46.955 bs=4096 00:24:46.955 iodepth=1 00:24:46.955 norandommap=1 00:24:46.955 numjobs=1 00:24:46.955 00:24:46.955 [job0] 00:24:46.955 filename=/dev/nvme0n1 00:24:46.955 [job1] 00:24:46.955 filename=/dev/nvme0n2 00:24:46.955 [job2] 00:24:46.955 filename=/dev/nvme0n3 00:24:46.955 [job3] 00:24:46.955 filename=/dev/nvme0n4 00:24:46.955 Could not set queue depth (nvme0n1) 00:24:46.955 Could not set queue depth (nvme0n2) 00:24:46.955 Could not set queue depth (nvme0n3) 00:24:46.955 Could not set queue depth (nvme0n4) 00:24:46.955 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:46.955 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:46.955 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:46.955 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:46.955 fio-3.35 00:24:46.955 Starting 4 threads 00:24:50.239 12:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:24:50.239 fio: pid=105621, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:24:50.239 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=26628096, buflen=4096 00:24:50.239 12:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:24:50.497 fio: pid=105620, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:24:50.497 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=49065984, buflen=4096 00:24:50.497 12:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:50.497 12:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:24:50.497 fio: pid=105618, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:24:50.497 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=34418688, buflen=4096 00:24:50.756 12:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:50.756 12:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:24:51.014 fio: pid=105619, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:24:51.014 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52920320, buflen=4096 00:24:51.014 00:24:51.014 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105618: Thu Dec 5 12:28:21 2024 00:24:51.014 read: IOPS=2455, BW=9819KiB/s (10.1MB/s)(32.8MiB/3423msec) 00:24:51.014 slat (usec): min=7, max=12592, avg=30.41, stdev=206.66 00:24:51.014 clat (usec): min=139, max=3046, avg=374.37, stdev=145.10 00:24:51.014 lat (usec): min=158, max=13097, avg=404.78, stdev=254.07 00:24:51.014 clat percentiles (usec): 00:24:51.014 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 245], 00:24:51.014 | 30.00th=[ 293], 40.00th=[ 367], 50.00th=[ 396], 60.00th=[ 412], 00:24:51.014 | 70.00th=[ 429], 80.00th=[ 453], 90.00th=[ 545], 95.00th=[ 594], 00:24:51.014 | 99.00th=[ 652], 99.50th=[ 766], 99.90th=[ 2008], 99.95th=[ 2147], 00:24:51.014 | 99.99th=[ 3032] 00:24:51.014 bw ( KiB/s): min= 8512, max= 9384, per=20.68%, avg=8886.67, stdev=326.55, samples=6 00:24:51.014 iops : min= 2128, max= 2346, avg=2221.67, stdev=81.64, samples=6 00:24:51.014 lat (usec) : 250=20.93%, 500=66.96%, 750=11.55%, 1000=0.31% 00:24:51.014 lat (msec) : 2=0.13%, 4=0.11% 00:24:51.014 cpu : usr=1.02%, sys=5.29%, ctx=8423, majf=0, minf=1 00:24:51.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:51.014 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.014 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.014 issued rwts: total=8404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:51.014 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105619: Thu Dec 5 12:28:21 2024 00:24:51.014 read: IOPS=3486, BW=13.6MiB/s (14.3MB/s)(50.5MiB/3706msec) 00:24:51.014 slat (usec): min=9, max=11969, avg=19.27, stdev=176.15 00:24:51.014 clat (usec): min=3, max=36893, avg=266.09, stdev=330.12 00:24:51.014 lat (usec): min=165, max=36906, avg=285.36, stdev=374.87 00:24:51.014 clat percentiles (usec): 00:24:51.014 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 194], 20.00th=[ 229], 00:24:51.014 | 30.00th=[ 243], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:24:51.014 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 334], 00:24:51.014 | 99.00th=[ 388], 99.50th=[ 461], 99.90th=[ 807], 99.95th=[ 1893], 00:24:51.014 | 99.99th=[ 3720] 00:24:51.014 bw ( KiB/s): min=12872, max=15362, per=32.09%, avg=13785.43, stdev=777.08, samples=7 00:24:51.014 iops : min= 3218, max= 3840, avg=3446.29, stdev=194.10, samples=7 00:24:51.014 lat (usec) : 4=0.01%, 100=0.01%, 250=35.65%, 500=63.92%, 750=0.28% 00:24:51.014 lat (usec) : 1000=0.06% 00:24:51.014 lat (msec) : 2=0.03%, 4=0.03%, 50=0.01% 00:24:51.014 cpu : usr=1.00%, sys=4.43%, ctx=12963, majf=0, minf=1 00:24:51.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:51.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.014 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.014 issued rwts: total=12921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:51.014 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105620: Thu Dec 5 12:28:21 2024 00:24:51.014 read: IOPS=3759, BW=14.7MiB/s (15.4MB/s)(46.8MiB/3187msec) 00:24:51.014 slat (usec): min=9, max=8810, avg=18.33, stdev=104.62 00:24:51.014 clat (usec): min=153, max=2661, avg=246.21, stdev=51.05 00:24:51.014 lat (usec): min=167, max=8999, avg=264.54, stdev=116.14 00:24:51.014 clat percentiles (usec): 00:24:51.014 | 1.00th=[ 174], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 219], 00:24:51.014 | 30.00th=[ 229], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251], 00:24:51.014 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:24:51.014 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 693], 99.95th=[ 1057], 00:24:51.014 | 99.99th=[ 2442] 00:24:51.014 bw ( KiB/s): min=14288, max=15640, per=35.04%, avg=15052.00, stdev=554.04, samples=6 00:24:51.014 iops : min= 3572, max= 3910, avg=3763.00, stdev=138.51, samples=6 00:24:51.014 lat (usec) : 250=58.53%, 500=41.33%, 750=0.05%, 1000=0.03% 00:24:51.014 lat (msec) : 2=0.03%, 4=0.02% 00:24:51.014 cpu : usr=1.07%, sys=5.12%, ctx=11982, majf=0, minf=2 00:24:51.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:51.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.014 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.014 issued rwts: total=11980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:51.014 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, 
func=io_u error, error=Operation not supported): pid=105621: Thu Dec 5 12:28:21 2024 00:24:51.014 read: IOPS=2207, BW=8830KiB/s (9042kB/s)(25.4MiB/2945msec) 00:24:51.014 slat (nsec): min=15090, max=94955, avg=28322.20, stdev=8653.05 00:24:51.014 clat (usec): min=202, max=5264, avg=421.53, stdev=125.74 00:24:51.014 lat (usec): min=220, max=5317, avg=449.84, stdev=125.77 00:24:51.014 clat percentiles (usec): 00:24:51.014 | 1.00th=[ 237], 5.00th=[ 277], 10.00th=[ 302], 20.00th=[ 355], 00:24:51.014 | 30.00th=[ 383], 40.00th=[ 404], 50.00th=[ 416], 60.00th=[ 429], 00:24:51.014 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 553], 95.00th=[ 586], 00:24:51.014 | 99.00th=[ 660], 99.50th=[ 766], 99.90th=[ 996], 99.95th=[ 2638], 00:24:51.014 | 99.99th=[ 5276] 00:24:51.014 bw ( KiB/s): min= 8656, max= 9224, per=20.77%, avg=8923.20, stdev=233.83, samples=5 00:24:51.014 iops : min= 2164, max= 2306, avg=2230.80, stdev=58.46, samples=5 00:24:51.014 lat (usec) : 250=2.23%, 500=82.81%, 750=14.35%, 1000=0.51% 00:24:51.014 lat (msec) : 2=0.02%, 4=0.06%, 10=0.02% 00:24:51.014 cpu : usr=1.19%, sys=5.06%, ctx=6504, majf=0, minf=2 00:24:51.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:51.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.014 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.014 issued rwts: total=6502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:51.014 00:24:51.014 Run status group 0 (all jobs): 00:24:51.014 READ: bw=42.0MiB/s (44.0MB/s), 8830KiB/s-14.7MiB/s (9042kB/s-15.4MB/s), io=155MiB (163MB), run=2945-3706msec 00:24:51.014 00:24:51.014 Disk stats (read/write): 00:24:51.014 nvme0n1: ios=8109/0, merge=0/0, ticks=3151/0, in_queue=3151, util=95.11% 00:24:51.014 nvme0n2: ios=12481/0, merge=0/0, ticks=3387/0, in_queue=3387, util=95.58% 00:24:51.014 nvme0n3: ios=11719/0, merge=0/0, ticks=2949/0, in_queue=2949, util=96.36% 00:24:51.014 nvme0n4: ios=6336/0, merge=0/0, ticks=2705/0, in_queue=2705, util=96.62% 00:24:51.014 12:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:51.014 12:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:24:51.272 12:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:51.272 12:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:24:51.530 12:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:51.530 12:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:24:51.788 12:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:51.788 12:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:24:52.047 12:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:52.047 12:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 105578 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:52.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:52.305 nvmf hotplug test: fio failed as expected 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:24:52.305 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 
00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.564 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.564 rmmod nvme_tcp 00:24:52.564 rmmod nvme_fabrics 00:24:52.822 rmmod nvme_keyring 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 105112 ']' 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 105112 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 105112 ']' 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 105112 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105112 00:24:52.822 killing process with pid 105112 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105112' 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 105112 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 105112 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:52.822 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:53.080 12:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:24:53.080 00:24:53.080 real 0m19.364s 00:24:53.080 user 0m59.643s 00:24:53.080 sys 0m9.728s 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.080 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.080 ************************************ 00:24:53.080 END TEST nvmf_fio_target 00:24:53.080 ************************************ 00:24:53.339 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:24:53.339 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:53.339 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.339 12:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:53.339 
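Every suite in this log runs under the same run_test wrapper, which accounts for the START/END banners and the real/user/sys timing triple just above. An illustrative reduction of its shape (a sketch; the autotest_common.sh original does more bookkeeping than shown):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    run_test nvmf_bdevio ./test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode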
************************************ 00:24:53.339 START TEST nvmf_bdevio 00:24:53.339 ************************************ 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:24:53.339 * Looking for test storage... 00:24:53.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.339 --rc genhtml_branch_coverage=1 00:24:53.339 --rc genhtml_function_coverage=1 00:24:53.339 --rc genhtml_legend=1 00:24:53.339 --rc geninfo_all_blocks=1 00:24:53.339 --rc geninfo_unexecuted_blocks=1 00:24:53.339 00:24:53.339 ' 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.339 --rc genhtml_branch_coverage=1 00:24:53.339 --rc genhtml_function_coverage=1 00:24:53.339 --rc genhtml_legend=1 00:24:53.339 --rc geninfo_all_blocks=1 00:24:53.339 --rc geninfo_unexecuted_blocks=1 00:24:53.339 00:24:53.339 ' 00:24:53.339 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.339 --rc genhtml_branch_coverage=1 00:24:53.340 --rc genhtml_function_coverage=1 00:24:53.340 --rc genhtml_legend=1 00:24:53.340 --rc geninfo_all_blocks=1 00:24:53.340 --rc geninfo_unexecuted_blocks=1 00:24:53.340 00:24:53.340 ' 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.340 --rc genhtml_branch_coverage=1 00:24:53.340 --rc genhtml_function_coverage=1 00:24:53.340 --rc genhtml_legend=1 00:24:53.340 --rc geninfo_all_blocks=1 00:24:53.340 --rc geninfo_unexecuted_blocks=1 00:24:53.340 00:24:53.340 ' 00:24:53.340 12:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.340 12:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.340 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:53.599 Cannot find device "nvmf_init_br" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:53.599 Cannot find device "nvmf_init_br2" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:53.599 Cannot find device "nvmf_tgt_br" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:53.599 Cannot find device "nvmf_tgt_br2" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:53.599 Cannot find device "nvmf_init_br" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:53.599 Cannot find device "nvmf_init_br2" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:53.599 Cannot find device "nvmf_tgt_br" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:53.599 Cannot find device "nvmf_tgt_br2" 00:24:53.599 12:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:53.599 Cannot find device "nvmf_br" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:53.599 Cannot find device "nvmf_init_if" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:53.599 Cannot find device "nvmf_init_if2" 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:53.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:53.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:53.599 12:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:53.599 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:53.858 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:53.858 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms
00:24:53.858 
00:24:53.858 --- 10.0.0.3 ping statistics ---
00:24:53.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:53.858 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:24:53.858 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:24:53.858 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms
00:24:53.858 
00:24:53.858 --- 10.0.0.4 ping statistics ---
00:24:53.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:53.858 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:53.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:53.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms
00:24:53.858 
00:24:53.858 --- 10.0.0.1 ping statistics ---
00:24:53.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:53.858 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:24:53.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:53.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms
00:24:53.858 
00:24:53.858 --- 10.0.0.2 ping statistics ---
00:24:53.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:53.858 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:24:53.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
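The four pings above validate the veth topology that nvmf_veth_init assembled earlier: initiator addresses 10.0.0.1/.2 live in the root namespace, target addresses 10.0.0.3/.4 sit on interfaces moved into the nvmf_tgt_ns_spdk namespace, and the peer ends of the veth pairs are enslaved to the nvmf_br bridge. A condensed reproduction of that layout (commands lifted from the log; only the first initiator/target pair is shown, and the iptables rule keeps port 4420 reachable on locked-down hosts):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT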
00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=106003 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 106003 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 106003 ']' 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.858 12:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:53.858 [2024-12-05 12:28:24.686307] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:53.858 [2024-12-05 12:28:24.687810] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:24:53.858 [2024-12-05 12:28:24.687884] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.116 [2024-12-05 12:28:24.842421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.116 [2024-12-05 12:28:24.900473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.116 [2024-12-05 12:28:24.901034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.116 [2024-12-05 12:28:24.901601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.116 [2024-12-05 12:28:24.902133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.116 [2024-12-05 12:28:24.902467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.116 [2024-12-05 12:28:24.904217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:54.116 [2024-12-05 12:28:24.904300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:54.116 [2024-12-05 12:28:24.904423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:54.116 [2024-12-05 12:28:24.904433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.375 [2024-12-05 12:28:25.013838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:54.375 [2024-12-05 12:28:25.014149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
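The notices above show what --interrupt-mode changes: reactors still come up on every core in the 0x78 mask (cores 3-6), but each spdk_thread is switched to interrupt mode so idle reactors block on an event fd instead of busy-polling. The launch itself, reproduced by hand (binary path and flags from the log; the polling loop is an illustrative stand-in for waitforlisten):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the target answers on /var/tmp/spdk.sock
    done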
00:24:54.375 [2024-12-05 12:28:25.014471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:54.375 [2024-12-05 12:28:25.016005] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:54.375 [2024-12-05 12:28:25.016968] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:54.375 [2024-12-05 12:28:25.114201] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:54.375 Malloc0 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:54.375 [2024-12-05 12:28:25.210359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:54.375 { 00:24:54.375 "params": { 00:24:54.375 "name": "Nvme$subsystem", 00:24:54.375 "trtype": "$TEST_TRANSPORT", 00:24:54.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.375 "adrfam": "ipv4", 00:24:54.375 "trsvcid": "$NVMF_PORT", 00:24:54.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.375 "hdgst": ${hdgst:-false}, 00:24:54.375 "ddgst": ${ddgst:-false} 00:24:54.375 }, 00:24:54.375 "method": "bdev_nvme_attach_controller" 00:24:54.375 } 00:24:54.375 EOF 00:24:54.375 )") 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:24:54.375 12:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:54.375 "params": { 00:24:54.375 "name": "Nvme1", 00:24:54.375 "trtype": "tcp", 00:24:54.375 "traddr": "10.0.0.3", 00:24:54.375 "adrfam": "ipv4", 00:24:54.375 "trsvcid": "4420", 00:24:54.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.375 "hdgst": false, 00:24:54.375 "ddgst": false 00:24:54.375 }, 00:24:54.375 "method": "bdev_nvme_attach_controller" 00:24:54.375 }' 00:24:54.632 [2024-12-05 12:28:25.280116] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
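The heredoc fragment assembled above is fed to bdevio as a complete SPDK JSON config over fd 62 (--json /dev/fd/62). A standalone equivalent written to a file instead; the outer "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json emits around the printed fragment:

    cat > /tmp/bdevio.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio.json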
00:24:54.632 [2024-12-05 12:28:25.280218] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106039 ]
00:24:54.632 [2024-12-05 12:28:25.436452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:54.632 [2024-12-05 12:28:25.496400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:54.632 [2024-12-05 12:28:25.496516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:54.632 [2024-12-05 12:28:25.496530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:54.889 I/O targets:
00:24:54.889 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:24:54.889 
00:24:54.889 
00:24:54.889 CUnit - A unit testing framework for C - Version 2.1-3
00:24:54.889 http://cunit.sourceforge.net/
00:24:54.889 
00:24:54.889 
00:24:54.889 Suite: bdevio tests on: Nvme1n1
00:24:54.889 Test: blockdev write read block ...passed
00:24:55.147 Test: blockdev write zeroes read block ...passed
00:24:55.147 Test: blockdev write zeroes read no split ...passed
00:24:55.147 Test: blockdev write zeroes read split ...passed
00:24:55.147 Test: blockdev write zeroes read split partial ...passed
00:24:55.147 Test: blockdev reset ...[2024-12-05 12:28:25.836392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:55.147 [2024-12-05 12:28:25.836790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebdf50 (9): Bad file descriptor
00:24:55.147 [2024-12-05 12:28:25.840895] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:24:55.147 passed 00:24:55.147 Test: blockdev write read 8 blocks ...passed 00:24:55.147 Test: blockdev write read size > 128k ...passed 00:24:55.147 Test: blockdev write read invalid size ...passed 00:24:55.147 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:55.147 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:55.147 Test: blockdev write read max offset ...passed 00:24:55.147 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:55.147 Test: blockdev writev readv 8 blocks ...passed 00:24:55.147 Test: blockdev writev readv 30 x 1block ...passed 00:24:55.147 Test: blockdev writev readv block ...passed 00:24:55.147 Test: blockdev writev readv size > 128k ...passed 00:24:55.147 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:55.404 Test: blockdev comparev and writev ...[2024-12-05 12:28:26.015528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:55.404 [2024-12-05 12:28:26.015572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:55.404 [2024-12-05 12:28:26.015608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:55.404 [2024-12-05 12:28:26.015628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:55.404 [2024-12-05 12:28:26.016293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:55.404 [2024-12-05 12:28:26.016357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:55.404 [2024-12-05 12:28:26.016384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:55.404 [2024-12-05 12:28:26.016395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:55.404 [2024-12-05 12:28:26.016974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:55.404 [2024-12-05 12:28:26.017020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:55.404 [2024-12-05 12:28:26.017038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:55.404 [2024-12-05 12:28:26.017049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:55.404 [2024-12-05 12:28:26.017566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:55.404 [2024-12-05 12:28:26.017613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:55.404 [2024-12-05 12:28:26.017645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:55.404 [2024-12-05 12:28:26.017655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:55.404 passed 00:24:55.405 Test: blockdev nvme passthru rw ...passed 00:24:55.405 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:28:26.101616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:55.405 [2024-12-05 12:28:26.101668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:55.405 [2024-12-05 12:28:26.101810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:55.405 [2024-12-05 12:28:26.101842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:55.405 [2024-12-05 12:28:26.101959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:55.405 [2024-12-05 12:28:26.101975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:55.405 [2024-12-05 12:28:26.102090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:55.405 [2024-12-05 12:28:26.102116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:55.405 passed 00:24:55.405 Test: blockdev nvme admin passthru ...passed 00:24:55.405 Test: blockdev copy ...passed 00:24:55.405 00:24:55.405 Run Summary: Type Total Ran Passed Failed Inactive 00:24:55.405 suites 1 1 n/a 0 0 00:24:55.405 tests 23 23 23 0 0 00:24:55.405 asserts 152 152 152 0 n/a 00:24:55.405 00:24:55.405 Elapsed time = 0.958 seconds 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.663 rmmod nvme_tcp 00:24:55.663 rmmod nvme_fabrics 00:24:55.663 rmmod nvme_keyring 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
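All 23 bdevio tests passed; the *NOTICE* COMPARE FAILURE and ABORTED - FAILED FUSED completions above are the expected wire-level output of the fused compare-and-write cases, not errors. The rmmod lines come from the modprobe -v -r retry loop in nvmf/common.sh, which keeps trying because the host modules can still be busy right after the disconnect; its shape, simplified (the sleep is illustrative):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e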
00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 106003 ']' 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 106003 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 106003 ']' 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 106003 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106003 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106003' 00:24:55.663 killing process with pid 106003 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 106003 00:24:55.663 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 106003 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:55.923 12:28:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:55.923 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:24:56.193 00:24:56.193 real 0m2.959s 00:24:56.193 user 0m7.557s 00:24:56.193 sys 0m1.109s 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.193 ************************************ 00:24:56.193 12:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:24:56.193 END TEST nvmf_bdevio 00:24:56.193 ************************************ 00:24:56.193 12:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:56.193 00:24:56.193 real 3m29.683s 00:24:56.193 user 9m24.577s 00:24:56.193 sys 1m16.580s 00:24:56.193 12:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.193 12:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:56.193 ************************************ 00:24:56.193 END TEST nvmf_target_core_interrupt_mode 00:24:56.193 ************************************ 00:24:56.471 12:28:27 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:24:56.471 12:28:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:56.471 12:28:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.471 12:28:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:56.471 ************************************ 00:24:56.471 START TEST nvmf_interrupt 00:24:56.471 ************************************ 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:24:56.471 * Looking for test storage... 00:24:56.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:24:56.471 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:56.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.472 --rc genhtml_branch_coverage=1 00:24:56.472 --rc genhtml_function_coverage=1 00:24:56.472 --rc genhtml_legend=1 00:24:56.472 --rc geninfo_all_blocks=1 00:24:56.472 --rc geninfo_unexecuted_blocks=1 00:24:56.472 00:24:56.472 ' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:56.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.472 --rc genhtml_branch_coverage=1 00:24:56.472 --rc genhtml_function_coverage=1 00:24:56.472 --rc genhtml_legend=1 00:24:56.472 --rc geninfo_all_blocks=1 00:24:56.472 --rc geninfo_unexecuted_blocks=1 00:24:56.472 00:24:56.472 ' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:56.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.472 --rc genhtml_branch_coverage=1 00:24:56.472 --rc genhtml_function_coverage=1 00:24:56.472 --rc genhtml_legend=1 00:24:56.472 --rc geninfo_all_blocks=1 00:24:56.472 --rc geninfo_unexecuted_blocks=1 00:24:56.472 00:24:56.472 ' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:56.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.472 --rc genhtml_branch_coverage=1 00:24:56.472 --rc genhtml_function_coverage=1 00:24:56.472 --rc genhtml_legend=1 00:24:56.472 --rc geninfo_all_blocks=1 00:24:56.472 --rc geninfo_unexecuted_blocks=1 00:24:56.472 00:24:56.472 ' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
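The lt/cmp_versions sequence traced above decides whether the installed lcov predates 2.x by splitting each version string on '.', '-' and ':' and comparing the pieces numerically. A condensed, self-contained paraphrase of that logic (version_lt is my name for it, not the scripts/common.sh helper; purely numeric components assumed):

  # Return 0 (true) when version $1 sorts strictly before version $2.
  version_lt() {
    local IFS=.-:                    # same separators the traced cmp_versions uses
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing component decides
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                         # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov is pre-2.x"   # same branch the traced 'lt 1.15 2' took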
00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:24:56.472 12:28:27 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
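Stepping back for a moment: the build_nvmf_app_args trace above assembled the target's argv as a bash array, and common.sh later (at @227, further down) prepends the netns wrapper before nvmfappstart launches it. A minimal sketch of that assembly using this run's concrete values (a paraphrase, not the nvmf/common.sh source; the interrupt-mode gate is shown as a plain variable standing in for the traced '[' 1 -eq 1 ']' check):

  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id (0 in this run) + full tracepoint mask
  interrupt_mode=1                              # interrupt.sh was invoked with --interrupt-mode
  (( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # run inside the target netns
  "${NVMF_APP[@]}" -m 0x3 &                     # -m 0x3: reactors on cores 0 and 1

This matches the launch line traced further below: ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3.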
00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:56.472 Cannot find device "nvmf_init_br" 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:56.472 Cannot find device "nvmf_init_br2" 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:56.472 Cannot find device "nvmf_tgt_br" 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:24:56.472 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:56.736 Cannot find device "nvmf_tgt_br2" 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:56.736 Cannot find device "nvmf_init_br" 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:56.736 Cannot find device "nvmf_init_br2" 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:56.736 Cannot find device "nvmf_tgt_br" 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:56.736 Cannot find device "nvmf_tgt_br2" 00:24:56.736 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:56.737 Cannot find device "nvmf_br" 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:24:56.737 Cannot find device "nvmf_init_if" 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:56.737 Cannot find device "nvmf_init_if2" 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:56.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:56.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:56.737 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
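Taken together, the netns and veth plumbing above, plus the bridge enslaving and iptables rules traced just below, build a small two-sided test network. A sketch of the resulting layout and a few stock iproute2 commands for inspecting it (the inspection commands are generic, not part of the test itself):

  # host namespace                        netns nvmf_tgt_ns_spdk
  #   nvmf_init_if   10.0.0.1/24            nvmf_tgt_if    10.0.0.3/24
  #   nvmf_init_if2  10.0.0.2/24            nvmf_tgt_if2   10.0.0.4/24
  # each veth peer (nvmf_init_br, nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2)
  # gets enslaved to bridge nvmf_br, putting all four addresses on one L2 segment
  ip -br addr show                                  # host-side interfaces
  ip netns exec nvmf_tgt_ns_spdk ip -br addr show   # target-side interfaces
  ip link show master nvmf_br                       # ports currently enslaved to the bridge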
00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:56.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:56.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:24:56.995 00:24:56.995 --- 10.0.0.3 ping statistics --- 00:24:56.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.995 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:56.995 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:56.995 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:56.995 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:24:56.996 00:24:56.996 --- 10.0.0.4 ping statistics --- 00:24:56.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.996 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:56.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:24:56.996 00:24:56.996 --- 10.0.0.1 ping statistics --- 00:24:56.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.996 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:56.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:56.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:24:56.996 00:24:56.996 --- 10.0.0.2 ping statistics --- 00:24:56.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.996 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=106291 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 106291 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 106291 ']' 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.996 12:28:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:56.996 [2024-12-05 12:28:27.805509] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:56.996 [2024-12-05 12:28:27.806761] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:24:56.996 [2024-12-05 12:28:27.806822] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.254 [2024-12-05 12:28:27.952638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:57.254 [2024-12-05 12:28:27.994606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:57.254 [2024-12-05 12:28:27.994686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.254 [2024-12-05 12:28:27.994697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.254 [2024-12-05 12:28:27.994704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.254 [2024-12-05 12:28:27.994710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.254 [2024-12-05 12:28:27.997336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.254 [2024-12-05 12:28:27.997360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.254 [2024-12-05 12:28:28.084603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:57.254 [2024-12-05 12:28:28.085105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:57.254 [2024-12-05 12:28:28.085114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:57.254 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.254 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:24:57.254 12:28:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.254 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:57.254 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:24:57.513 5000+0 records in 00:24:57.513 5000+0 records out 00:24:57.513 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0327273 s, 313 MB/s 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:57.513 AIO0 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:57.513 [2024-12-05 12:28:28.254225] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:24:57.513 [2024-12-05 12:28:28.286712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106291 0 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106291 0 idle 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106291 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:24:57.513 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106291 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.25 reactor_0' 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106291 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.25 reactor_0 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106291 1 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106291 1 idle 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106291 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:24:57.772 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:24:57.773 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:24:57.773 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:24:57.773 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:24:57.773 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1' 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=106345 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:24:58.031 
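With perf_pid captured, spdk_nvme_perf now drives randrw I/O from cores 2-3 for ten seconds while the harness samples each reactor thread's CPU. The reactor_is_busy/reactor_is_idle checks traced around here reduce to the pipeline below (a condensed paraphrase of interrupt/common.sh, not the verbatim helper; the real loop retries up to ten one-second samples before failing):

  # %CPU of one reactor thread, taken from a single batch-mode top snapshot.
  reactor_cpu() {                      # $1 = nvmf_tgt pid, $2 = reactor index
    top -bHn 1 -p "$1" -w 256 | grep "reactor_$2" | sed -e 's/^\s*//g' | awk '{print $9}'
  }
  pid=106291                           # the traced target pid
  rate=$(reactor_cpu "$pid" 0)
  rate=${rate%.*}                      # truncate the decimal, as the traced cpu_rate=99.9 -> 99 does
  if (( ${rate:-0} >= 30 )); then      # BUSY_THRESHOLD=30 while perf is running
    echo "reactor_0 is busy at ${rate}%"
  else
    echo "reactor_0 still idle; sleep 1 and resample"
  fi

The point of the check: in interrupt mode the reactors sat at 0.0% while the target was idle (the earlier samples), and they should climb past the threshold only once perf starts submitting I/O.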
12:28:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106291 0 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106291 0 busy 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106291 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:24:58.031 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106291 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.25 reactor_0' 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106291 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.25 reactor_0 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:24:58.032 12:28:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:24:59.407 12:28:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:24:59.407 12:28:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:24:59.407 12:28:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:24:59.407 12:28:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106291 root 20 0 64.2g 46336 33152 R 99.9 0.4 0:01.61 reactor_0' 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106291 root 20 0 64.2g 46336 33152 R 99.9 0.4 0:01.61 reactor_0 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106291 1 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106291 1 busy 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106291 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 46336 33152 R 73.3 0.4 0:00.80 reactor_1' 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 46336 33152 R 73.3 0.4 0:00.80 reactor_1 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:24:59.407 12:28:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 106345 00:25:09.377 Initializing NVMe Controllers 00:25:09.377 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.377 Controller IO queue size 256, less than required. 00:25:09.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:09.377 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:25:09.377 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:25:09.377 Initialization complete. Launching workers. 
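For reference, the Total row of the latency table that follows is the IOPS-weighted combination of the two per-core rows. A quick recomputation from the printed per-core values (numbers copied from the table; the total lands on 12009.28 rather than the printed 12009.29 only because the per-core figures are themselves rounded):

  awk 'BEGIN {
    i2 = 6023.99; a2 = 42577.37   # core 2: IOPS, average latency (us)
    i3 = 5985.29; a3 = 42844.87   # core 3: IOPS, average latency (us)
    t  = i2 + i3
    printf "Total: %.2f IOPS, %.2f us weighted average\n", t, (i2*a2 + i3*a3)/t
  }'
  # -> Total: 12009.28 IOPS, 42710.69 us weighted average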
00:25:09.377 ======================================================== 00:25:09.377 Latency(us) 00:25:09.377 Device Information : IOPS MiB/s Average min max 00:25:09.377 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6023.99 23.53 42577.37 9552.68 77121.07 00:25:09.377 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 5985.29 23.38 42844.87 7465.59 102581.01 00:25:09.377 ======================================================== 00:25:09.377 Total : 12009.29 46.91 42710.69 7465.59 102581.01 00:25:09.377 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106291 0 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106291 0 idle 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106291 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:25:09.377 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:25:09.378 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:25:09.378 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:25:09.378 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:25:09.378 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:25:09.378 12:28:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106291 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:13.85 reactor_0' 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106291 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:13.85 reactor_0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106291 1 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106291 1 idle 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106291 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.80 reactor_1' 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.80 reactor_1 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:09.378 12:28:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:25:10.753 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106291 0 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106291 0 idle 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106291 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106291 root 20 0 64.2g 48640 33152 S 6.7 0.4 0:13.92 reactor_0' 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106291 root 20 0 64.2g 48640 33152 S 6.7 0.4 0:13.92 reactor_0 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106291 1 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106291 1 idle 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106291 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106291 -w 256 00:25:10.754 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106295 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.81 reactor_1' 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106295 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.81 reactor_1 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:11.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:11.012 12:28:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:25:11.270 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:11.270 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:25:11.270 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:11.270 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:11.270 rmmod nvme_tcp 00:25:11.270 rmmod nvme_fabrics 00:25:11.270 rmmod nvme_keyring 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 106291 ']' 00:25:11.529 12:28:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 106291 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 106291 ']' 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 106291 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106291 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.529 killing process with pid 106291 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106291' 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 106291 00:25:11.529 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 106291 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:11.787 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:11.788 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:11.788 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:11.788 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:25:12.046 00:25:12.046 real 0m15.707s 00:25:12.046 user 0m26.478s 00:25:12.046 sys 0m9.263s 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.046 ************************************ 00:25:12.046 END TEST nvmf_interrupt 00:25:12.046 ************************************ 00:25:12.046 12:28:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:25:12.046 00:25:12.046 real 19m48.609s 00:25:12.046 user 51m55.681s 00:25:12.046 sys 5m2.064s 00:25:12.046 12:28:42 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.046 ************************************ 00:25:12.046 END TEST nvmf_tcp 00:25:12.046 12:28:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.046 ************************************ 00:25:12.046 12:28:42 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:25:12.046 12:28:42 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:12.046 12:28:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:12.046 12:28:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.046 12:28:42 -- common/autotest_common.sh@10 -- # set +x 00:25:12.046 ************************************ 00:25:12.046 START TEST spdkcli_nvmf_tcp 00:25:12.046 ************************************ 00:25:12.046 12:28:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:12.304 * Looking for test storage... 
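The idle check traced in the nvmf_interrupt run above boils down to a few lines of shell: take one batch-mode top sample of the reactor thread, pull the %CPU column out of its row, truncate it to an integer, and compare it against the idle threshold. A condensed sketch of that logic, reconstructed from the interrupt/common.sh xtrace above (the wrapper name here is invented; the real helper retries up to ten times and also covers the busy case against busy_threshold=65):

reactor_is_idle_sketch() {
    local pid=$1 idx=$2
    local idle_threshold=30
    # One batch-mode sample (-b), thread view (-H), single iteration (-n 1),
    # limited to the target process (-p), wide output so the row is not cut (-w 256).
    local row
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx") || return 1
    # After stripping leading whitespace, column 9 of the top row is %CPU;
    # drop the fractional part so bash can compare it arithmetically
    # (the trace above shows cpu_rate=6.7 becoming cpu_rate=6 this way).
    local cpu_rate
    cpu_rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}
    (( cpu_rate <= idle_threshold ))   # success (0) when the reactor is idle
}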
00:25:12.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:12.305 12:28:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:12.305 12:28:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:25:12.305 12:28:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:12.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.305 --rc genhtml_branch_coverage=1 00:25:12.305 --rc genhtml_function_coverage=1 00:25:12.305 --rc genhtml_legend=1 00:25:12.305 --rc geninfo_all_blocks=1 00:25:12.305 --rc geninfo_unexecuted_blocks=1 00:25:12.305 00:25:12.305 ' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:12.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.305 --rc genhtml_branch_coverage=1 
00:25:12.305 --rc genhtml_function_coverage=1 00:25:12.305 --rc genhtml_legend=1 00:25:12.305 --rc geninfo_all_blocks=1 00:25:12.305 --rc geninfo_unexecuted_blocks=1 00:25:12.305 00:25:12.305 ' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:12.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.305 --rc genhtml_branch_coverage=1 00:25:12.305 --rc genhtml_function_coverage=1 00:25:12.305 --rc genhtml_legend=1 00:25:12.305 --rc geninfo_all_blocks=1 00:25:12.305 --rc geninfo_unexecuted_blocks=1 00:25:12.305 00:25:12.305 ' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:12.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.305 --rc genhtml_branch_coverage=1 00:25:12.305 --rc genhtml_function_coverage=1 00:25:12.305 --rc genhtml_legend=1 00:25:12.305 --rc geninfo_all_blocks=1 00:25:12.305 --rc geninfo_unexecuted_blocks=1 00:25:12.305 00:25:12.305 ' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.305 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.305 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=106687 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 106687 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 106687 ']' 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.306 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.564 [2024-12-05 12:28:43.187090] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:25:12.564 [2024-12-05 12:28:43.187184] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106687 ] 00:25:12.564 [2024-12-05 12:28:43.340503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:12.564 [2024-12-05 12:28:43.411352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.564 [2024-12-05 12:28:43.411374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:12.823 12:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:12.823 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:12.823 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:12.823 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:12.823 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:12.823 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:12.823 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:12.823 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:12.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:12.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:12.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:12.823 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:12.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:12.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:12.823 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:12.823 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:12.824 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:12.824 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:12.824 ' 00:25:16.106 [2024-12-05 12:28:46.392414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.040 [2024-12-05 12:28:47.718094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:19.568 [2024-12-05 12:28:50.169074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:21.469 [2024-12-05 12:28:52.287623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:23.368 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:23.368 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:23.368 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:23.368 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
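Each "Executing command:" line in this stretch is spdkcli_job.py replaying one row of the quoted block passed to it above: the whole scenario travels as a single shell argument, one command per line, and a row pairs a spdkcli command with a substring the test expects in its output, plus an optional trailing True (rows without it are echoed back with False; what that flag controls inside spdkcli_job.py is not shown in this trace). The invocation pattern, trimmed to two rows taken from this run for illustration (paths as in this job):

spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py

# Row format: '<spdkcli command>' '<expected match>' <flag>
"$spdkcli_job" "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
'/nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192' '' True
"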
00:25:23.368 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:23.368 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:23.368 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:23.368 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.368 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.368 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:23.368 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:23.368 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:23.368 12:28:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:23.368 12:28:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:23.368 12:28:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:25:23.368 12:28:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:23.368 12:28:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.368 12:28:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.368 12:28:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:23.368 12:28:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:23.627 12:28:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:23.885 12:28:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:23.885 12:28:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:23.885 12:28:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:23.885 12:28:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 12:28:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:23.885 12:28:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.885 12:28:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:23.885 12:28:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:23.885 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:23.885 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.885 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:23.885 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:23.885 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:23.885 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:23.885 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:23.885 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:23.885 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:23.885 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:23.885 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:23.885 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:23.885 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:23.885 ' 00:25:30.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:30.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:30.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:30.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:30.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:30.472 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:30.472 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:30.472 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:30.472 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:30.472 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:30.472 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:30.472 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:30.472 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:30.472 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 106687 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106687 ']' 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106687 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106687 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106687' 00:25:30.472 killing process with pid 106687 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 106687 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 106687 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 106687 ']' 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 106687 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106687 ']' 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106687 00:25:30.472 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (106687) - No such process 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 106687 is not found' 00:25:30.472 Process with pid 106687 is not found 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:30.472 00:25:30.472 real 0m17.609s 00:25:30.472 user 0m38.184s 00:25:30.472 sys 0m0.885s 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:30.472 12:29:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.472 ************************************ 00:25:30.472 END TEST spdkcli_nvmf_tcp 00:25:30.472 ************************************ 00:25:30.472 12:29:00 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:30.472 12:29:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:30.472 12:29:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.472 12:29:00 -- common/autotest_common.sh@10 -- # set +x 00:25:30.472 ************************************ 00:25:30.472 START TEST nvmf_identify_passthru 00:25:30.472 ************************************ 00:25:30.472 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:30.472 * Looking for test storage... 00:25:30.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:30.472 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:30.472 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:30.472 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:25:30.472 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.472 12:29:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:25:30.472 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.472 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:30.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.473 --rc genhtml_branch_coverage=1 00:25:30.473 --rc genhtml_function_coverage=1 00:25:30.473 --rc genhtml_legend=1 00:25:30.473 --rc geninfo_all_blocks=1 00:25:30.473 --rc geninfo_unexecuted_blocks=1 00:25:30.473 00:25:30.473 ' 00:25:30.473 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:30.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.473 --rc genhtml_branch_coverage=1 00:25:30.473 --rc genhtml_function_coverage=1 00:25:30.473 --rc genhtml_legend=1 00:25:30.473 --rc geninfo_all_blocks=1 00:25:30.473 --rc geninfo_unexecuted_blocks=1 00:25:30.473 00:25:30.473 ' 00:25:30.473 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:30.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.473 --rc genhtml_branch_coverage=1 00:25:30.473 --rc genhtml_function_coverage=1 00:25:30.473 --rc genhtml_legend=1 00:25:30.473 --rc geninfo_all_blocks=1 00:25:30.473 --rc geninfo_unexecuted_blocks=1 00:25:30.473 00:25:30.473 ' 00:25:30.473 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:30.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.473 --rc genhtml_branch_coverage=1 00:25:30.473 --rc genhtml_function_coverage=1 00:25:30.473 --rc genhtml_legend=1 00:25:30.473 --rc geninfo_all_blocks=1 00:25:30.473 --rc geninfo_unexecuted_blocks=1 00:25:30.473 00:25:30.473 ' 00:25:30.473 12:29:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.473 
12:29:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:30.473 12:29:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.473 12:29:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:30.473 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.473 12:29:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:30.473 12:29:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.473 12:29:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:30.473 12:29:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.473 12:29:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.473 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:30.473 12:29:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:30.473 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:30.474 Cannot find device "nvmf_init_br" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:30.474 Cannot find device "nvmf_init_br2" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:30.474 Cannot find device "nvmf_tgt_br" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:30.474 Cannot find device "nvmf_tgt_br2" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:30.474 Cannot find device "nvmf_init_br" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:30.474 Cannot find device "nvmf_init_br2" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:30.474 Cannot find device "nvmf_tgt_br" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:30.474 Cannot find device "nvmf_tgt_br2" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:30.474 Cannot find device "nvmf_br" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:30.474 Cannot find device "nvmf_init_if" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:30.474 Cannot find device "nvmf_init_if2" 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:30.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:30.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:30.474 12:29:00 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:30.474 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:30.474 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:25:30.474 00:25:30.474 --- 10.0.0.3 ping statistics --- 00:25:30.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.474 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:30.474 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:30.474 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:25:30.474 00:25:30.474 --- 10.0.0.4 ping statistics --- 00:25:30.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.474 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:30.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:30.474 00:25:30.474 --- 10.0.0.1 ping statistics --- 00:25:30.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.474 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:30.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:30.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:25:30.474 00:25:30.474 --- 10.0.0.2 ping statistics --- 00:25:30.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.474 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.474 12:29:01 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.474 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:30.474 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:30.474 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:25:30.474 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:25:30.474 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:25:30.474 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:30.474 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:25:30.474 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:30.733 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
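In short, the get_first_nvme_bdf step and the serial-number read above reduce to one pipeline. A minimal sketch built only from the commands visible in the trace; `head -n1` stands in for the helper's array indexing and is an assumption:

  # enumerate the NVMe BDFs SPDK can see and take the first one
  bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
          | jq -r '.config[].params.traddr' | head -n1)
  # read that controller's serial number over PCIe with the identify example app
  serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
             -r "trtype:PCIe traddr:$bdf" -i 0 \
           | grep 'Serial Number:' | awk '{print $3}')
  echo "$bdf $serial"   # in this run: 0000:00:10.0 12340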
00:25:30.733 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:25:30.733 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:30.733 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:30.991 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:30.991 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:30.991 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:30.991 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=107193 00:25:30.991 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:30.991 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.991 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 107193 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 107193 ']' 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.991 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:30.991 [2024-12-05 12:29:01.731939] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:25:30.992 [2024-12-05 12:29:01.732046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.255 [2024-12-05 12:29:01.879496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.255 [2024-12-05 12:29:01.924941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.255 [2024-12-05 12:29:01.925023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.255 [2024-12-05 12:29:01.925050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.255 [2024-12-05 12:29:01.925058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
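The launch above follows the usual shape: start nvmf_tgt inside the target namespace with --wait-for-rpc, record its pid, and block until the RPC socket answers. A condensed sketch; the polling loop is an assumption standing in for waitforlisten's internals:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # stand-in for waitforlisten: retry until /var/tmp/spdk.sock accepts an RPC
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

With --wait-for-rpc the framework stays paused, which is why the passthru-identify flag can still be set before the framework_start_init call that follows.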
00:25:31.255 [2024-12-05 12:29:01.925064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.255 [2024-12-05 12:29:01.926248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.255 [2024-12-05 12:29:01.926407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.255 [2024-12-05 12:29:01.926481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.255 [2024-12-05 12:29:01.926483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.255 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.255 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:25:31.255 12:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:31.255 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.255 12:29:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.255 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.255 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:31.255 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.255 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.255 [2024-12-05 12:29:02.107387] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:31.255 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.255 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.255 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.255 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.516 [2024-12-05 12:29:02.121399] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.516 Nvme0n1 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.516 [2024-12-05 12:29:02.273512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:31.516 [ 00:25:31.516 { 00:25:31.516 "allow_any_host": true, 00:25:31.516 "hosts": [], 00:25:31.516 "listen_addresses": [], 00:25:31.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:31.516 "subtype": "Discovery" 00:25:31.516 }, 00:25:31.516 { 00:25:31.516 "allow_any_host": true, 00:25:31.516 "hosts": [], 00:25:31.516 "listen_addresses": [ 00:25:31.516 { 00:25:31.516 "adrfam": "IPv4", 00:25:31.516 "traddr": "10.0.0.3", 00:25:31.516 "trsvcid": "4420", 00:25:31.516 "trtype": "TCP" 00:25:31.516 } 00:25:31.516 ], 00:25:31.516 "max_cntlid": 65519, 00:25:31.516 "max_namespaces": 1, 00:25:31.516 "min_cntlid": 1, 00:25:31.516 "model_number": "SPDK bdev Controller", 00:25:31.516 "namespaces": [ 00:25:31.516 { 00:25:31.516 "bdev_name": "Nvme0n1", 00:25:31.516 "name": "Nvme0n1", 00:25:31.516 "nguid": "928DBC05760746D68BC6477029111EE1", 00:25:31.516 "nsid": 1, 00:25:31.516 "uuid": "928dbc05-7607-46d6-8bc6-477029111ee1" 00:25:31.516 } 00:25:31.516 ], 00:25:31.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.516 "serial_number": "SPDK00000000000001", 00:25:31.516 "subtype": "NVMe" 00:25:31.516 } 00:25:31.516 ] 00:25:31.516 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:31.516 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:31.774 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:31.774 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:31.774 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:31.774 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:32.032 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:32.032 12:29:02 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:32.032 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:32.032 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.032 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.032 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:32.032 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.032 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:32.032 12:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:32.032 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.032 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:25:32.032 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.032 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:25:32.032 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.032 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.032 rmmod nvme_tcp 00:25:32.032 rmmod nvme_fabrics 00:25:32.032 rmmod nvme_keyring 00:25:32.291 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.291 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:25:32.291 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:25:32.291 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 107193 ']' 00:25:32.291 12:29:02 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 107193 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 107193 ']' 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 107193 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107193 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107193' 00:25:32.291 killing process with pid 107193 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 107193 00:25:32.291 12:29:02 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 107193 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:32.291 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.551 12:29:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:32.551 12:29:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.551 12:29:03 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:25:32.551 ************************************ 00:25:32.551 END TEST nvmf_identify_passthru 00:25:32.551 ************************************ 00:25:32.551 00:25:32.551 real 0m2.836s 00:25:32.551 user 0m5.292s 00:25:32.551 sys 0m0.968s 00:25:32.551 12:29:03 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.551 12:29:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:32.551 12:29:03 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:32.551 12:29:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:32.551 12:29:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.551 12:29:03 -- common/autotest_common.sh@10 -- # set +x 00:25:32.810 ************************************ 00:25:32.810 START TEST nvmf_dif 00:25:32.810 ************************************ 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:32.810 * Looking for test storage... 
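The nvmf_veth_fini pass that closed out identify_passthru tears everything down in reverse order of setup: restore iptables without the SPDK-tagged rules, unslave and down the bridge-side veth ends, delete the bridge and the veth pairs, then drop the namespace. Condensed, with the names from the log (the final netns delete is an assumption about what remove_spdk_ns does):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep everything but SPDK_NVMF rules
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$port" nomaster
      ip link set "$port" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed body of remove_spdk_ns

Tagging every rule with an 'SPDK_NVMF:' comment at insert time is what makes the one-line iptables-save | grep -v | iptables-restore cleanup safe for the rest of the host's firewall.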
00:25:32.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.810 12:29:03 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:32.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.810 --rc genhtml_branch_coverage=1 00:25:32.810 --rc genhtml_function_coverage=1 00:25:32.810 --rc genhtml_legend=1 00:25:32.810 --rc geninfo_all_blocks=1 00:25:32.810 --rc geninfo_unexecuted_blocks=1 00:25:32.810 00:25:32.810 ' 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:32.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.810 --rc genhtml_branch_coverage=1 00:25:32.810 --rc genhtml_function_coverage=1 00:25:32.810 --rc genhtml_legend=1 00:25:32.810 --rc geninfo_all_blocks=1 00:25:32.810 --rc geninfo_unexecuted_blocks=1 00:25:32.810 00:25:32.810 ' 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:25:32.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.810 --rc genhtml_branch_coverage=1 00:25:32.810 --rc genhtml_function_coverage=1 00:25:32.810 --rc genhtml_legend=1 00:25:32.810 --rc geninfo_all_blocks=1 00:25:32.810 --rc geninfo_unexecuted_blocks=1 00:25:32.810 00:25:32.810 ' 00:25:32.810 12:29:03 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:32.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.810 --rc genhtml_branch_coverage=1 00:25:32.810 --rc genhtml_function_coverage=1 00:25:32.810 --rc genhtml_legend=1 00:25:32.810 --rc geninfo_all_blocks=1 00:25:32.810 --rc geninfo_unexecuted_blocks=1 00:25:32.810 00:25:32.810 ' 00:25:32.810 12:29:03 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:32.810 12:29:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:32.810 12:29:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.810 12:29:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.810 12:29:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.810 12:29:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.810 12:29:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:32.811 12:29:03 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:25:32.811 12:29:03 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.811 12:29:03 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.811 12:29:03 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.811 12:29:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.811 12:29:03 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.811 12:29:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.811 12:29:03 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:32.811 12:29:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:32.811 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:32.811 12:29:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:32.811 12:29:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:32.811 12:29:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:32.811 12:29:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:32.811 12:29:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.811 12:29:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:32.811 12:29:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:32.811 12:29:03 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:32.811 Cannot find device "nvmf_init_br" 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@162 -- # true 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:32.811 Cannot find device "nvmf_init_br2" 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@163 -- # true 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:32.811 Cannot find device "nvmf_tgt_br" 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@164 -- # true 00:25:32.811 12:29:03 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:33.069 Cannot find device "nvmf_tgt_br2" 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@165 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:33.069 Cannot find device "nvmf_init_br" 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@166 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:33.069 Cannot find device "nvmf_init_br2" 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@167 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:33.069 Cannot find device "nvmf_tgt_br" 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@168 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:33.069 Cannot find device "nvmf_tgt_br2" 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@169 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:33.069 Cannot find device "nvmf_br" 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@170 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:25:33.069 Cannot find device "nvmf_init_if" 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@171 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:33.069 Cannot find device "nvmf_init_if2" 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@172 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:33.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@173 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:33.069 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@174 -- # true 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:33.069 12:29:03 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:33.328 12:29:03 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:33.328 12:29:03 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:33.328 12:29:04 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:33.328 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:33.328 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:25:33.328 00:25:33.328 --- 10.0.0.3 ping statistics --- 00:25:33.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.328 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:33.328 12:29:04 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:33.328 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:33.328 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:25:33.328 00:25:33.328 --- 10.0.0.4 ping statistics --- 00:25:33.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.328 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:33.328 12:29:04 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:33.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:33.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:25:33.328 00:25:33.328 --- 10.0.0.1 ping statistics --- 00:25:33.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.328 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:33.328 12:29:04 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:33.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:33.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:25:33.328 00:25:33.328 --- 10.0.0.2 ping statistics --- 00:25:33.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.328 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:25:33.328 12:29:04 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.328 12:29:04 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:25:33.328 12:29:04 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:25:33.328 12:29:04 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:33.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:33.586 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:33.586 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.586 12:29:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:33.586 12:29:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.586 12:29:04 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.586 12:29:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=107572 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:33.586 12:29:04 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 107572 00:25:33.586 12:29:04 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 107572 ']' 00:25:33.586 12:29:04 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.586 12:29:04 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.586 12:29:04 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.586 12:29:04 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.587 12:29:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:33.845 [2024-12-05 12:29:04.518918] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:25:33.845 [2024-12-05 12:29:04.519592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.845 [2024-12-05 12:29:04.672245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.104 [2024-12-05 12:29:04.732129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
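Two shell arrays do the composition work above: common.sh@227 splices the netns prefix onto NVMF_APP so the target runs inside nvmf_tgt_ns_spdk, and dif.sh@136 appends --dif-insert-or-strip so the TCP transport comes up with DIF insert/strip enabled. In effect (a condensed sketch; rpc_cmd is the test suite's RPC wrapper):

  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # prepend the namespace prefix
  "${NVMF_APP[@]}" &                                       # target now listens only inside the netns
  NVMF_TRANSPORT_OPTS='-t tcp -o --dif-insert-or-strip'
  # once the target answers RPC, dif.sh@50 creates the transport with exactly these options:
  rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip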
00:25:34.104 [2024-12-05 12:29:04.732194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.104 [2024-12-05 12:29:04.732210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.104 [2024-12-05 12:29:04.732221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.104 [2024-12-05 12:29:04.732230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.104 [2024-12-05 12:29:04.732716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:25:34.104 12:29:04 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:34.104 12:29:04 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.104 12:29:04 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:34.104 12:29:04 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:34.104 [2024-12-05 12:29:04.936433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.104 12:29:04 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.104 12:29:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:34.104 ************************************ 00:25:34.104 START TEST fio_dif_1_default 00:25:34.104 ************************************ 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:34.104 bdev_null0 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:34.104 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.104 12:29:04 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:34.363 [2024-12-05 12:29:04.984522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:34.363 { 00:25:34.363 "params": { 00:25:34.363 "name": "Nvme$subsystem", 00:25:34.363 "trtype": "$TEST_TRANSPORT", 00:25:34.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.363 "adrfam": "ipv4", 00:25:34.363 "trsvcid": "$NVMF_PORT", 00:25:34.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.363 
"hdgst": ${hdgst:-false}, 00:25:34.363 "ddgst": ${ddgst:-false} 00:25:34.363 }, 00:25:34.363 "method": "bdev_nvme_attach_controller" 00:25:34.363 } 00:25:34.363 EOF 00:25:34.363 )") 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:25:34.363 12:29:04 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:25:34.363 12:29:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:34.363 "params": { 00:25:34.363 "name": "Nvme0", 00:25:34.363 "trtype": "tcp", 00:25:34.363 "traddr": "10.0.0.3", 00:25:34.363 "adrfam": "ipv4", 00:25:34.363 "trsvcid": "4420", 00:25:34.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:34.363 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:34.364 "hdgst": false, 00:25:34.364 "ddgst": false 00:25:34.364 }, 00:25:34.364 "method": "bdev_nvme_attach_controller" 00:25:34.364 }' 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:34.364 12:29:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:34.364 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:34.364 fio-3.35 00:25:34.364 Starting 1 thread 00:25:46.630 00:25:46.630 filename0: (groupid=0, jobs=1): err= 0: pid=107649: Thu Dec 5 12:29:15 2024 00:25:46.630 read: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(134MiB/10034msec) 00:25:46.630 slat (nsec): min=6006, max=76695, avg=7378.02, stdev=2741.81 00:25:46.630 clat (usec): min=362, max=41517, avg=1149.93, stdev=5451.24 00:25:46.630 lat (usec): min=369, max=41526, avg=1157.31, stdev=5451.38 00:25:46.630 clat percentiles (usec): 00:25:46.630 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 375], 20.00th=[ 383], 00:25:46.630 | 30.00th=[ 388], 40.00th=[ 392], 50.00th=[ 396], 
60.00th=[ 400], 00:25:46.630 | 70.00th=[ 408], 80.00th=[ 416], 90.00th=[ 441], 95.00th=[ 562], 00:25:46.630 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:25:46.630 | 99.99th=[41681] 00:25:46.630 bw ( KiB/s): min= 2880, max=21024, per=100.00%, avg=13692.80, stdev=4360.44, samples=20 00:25:46.630 iops : min= 720, max= 5256, avg=3423.20, stdev=1090.11, samples=20 00:25:46.630 lat (usec) : 500=94.05%, 750=4.08%, 1000=0.02% 00:25:46.630 lat (msec) : 4=0.01%, 50=1.83% 00:25:46.630 cpu : usr=88.93%, sys=9.30%, ctx=190, majf=0, minf=9 00:25:46.630 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:46.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:46.630 issued rwts: total=34236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:46.630 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:46.630 00:25:46.630 Run status group 0 (all jobs): 00:25:46.630 READ: bw=13.3MiB/s (14.0MB/s), 13.3MiB/s-13.3MiB/s (14.0MB/s-14.0MB/s), io=134MiB (140MB), run=10034-10034msec 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.630 12:29:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.630 ************************************ 00:25:46.630 END TEST fio_dif_1_default 00:25:46.630 ************************************ 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 00:25:46.631 real 0m11.117s 00:25:46.631 user 0m9.620s 00:25:46.631 sys 0m1.221s 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 12:29:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:46.631 12:29:16 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:46.631 12:29:16 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 ************************************ 00:25:46.631 START TEST fio_dif_1_multi_subsystems 00:25:46.631 ************************************ 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:25:46.631 12:29:16 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 bdev_null0 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 [2024-12-05 12:29:16.157851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 bdev_null1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.631 { 00:25:46.631 "params": { 00:25:46.631 "name": "Nvme$subsystem", 00:25:46.631 "trtype": "$TEST_TRANSPORT", 00:25:46.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.631 "adrfam": "ipv4", 00:25:46.631 "trsvcid": "$NVMF_PORT", 00:25:46.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.631 "hdgst": ${hdgst:-false}, 00:25:46.631 "ddgst": ${ddgst:-false} 00:25:46.631 }, 00:25:46.631 "method": "bdev_nvme_attach_controller" 00:25:46.631 } 00:25:46.631 EOF 00:25:46.631 )") 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:46.631 12:29:16 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:46.631 { 00:25:46.631 "params": { 00:25:46.631 "name": "Nvme$subsystem", 00:25:46.631 "trtype": "$TEST_TRANSPORT", 00:25:46.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.631 "adrfam": "ipv4", 00:25:46.631 "trsvcid": "$NVMF_PORT", 00:25:46.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.631 "hdgst": ${hdgst:-false}, 00:25:46.631 "ddgst": ${ddgst:-false} 00:25:46.631 }, 00:25:46.631 "method": "bdev_nvme_attach_controller" 00:25:46.631 } 00:25:46.631 EOF 00:25:46.631 )") 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:46.631 "params": { 00:25:46.631 "name": "Nvme0", 00:25:46.631 "trtype": "tcp", 00:25:46.631 "traddr": "10.0.0.3", 00:25:46.631 "adrfam": "ipv4", 00:25:46.631 "trsvcid": "4420", 00:25:46.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:46.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:46.631 "hdgst": false, 00:25:46.631 "ddgst": false 00:25:46.631 }, 00:25:46.631 "method": "bdev_nvme_attach_controller" 00:25:46.631 },{ 00:25:46.631 "params": { 00:25:46.631 "name": "Nvme1", 00:25:46.631 "trtype": "tcp", 00:25:46.631 "traddr": "10.0.0.3", 00:25:46.631 "adrfam": "ipv4", 00:25:46.631 "trsvcid": "4420", 00:25:46.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.631 "hdgst": false, 00:25:46.631 "ddgst": false 00:25:46.631 }, 00:25:46.631 "method": "bdev_nvme_attach_controller" 00:25:46.631 }' 00:25:46.631 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:46.632 12:29:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:46.632 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:46.632 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:46.632 fio-3.35 00:25:46.632 Starting 2 threads 00:25:56.600 00:25:56.600 filename0: (groupid=0, jobs=1): err= 0: pid=107810: Thu Dec 5 12:29:27 2024 00:25:56.600 read: IOPS=246, BW=985KiB/s (1009kB/s)(9888KiB/10039msec) 00:25:56.600 slat (nsec): min=6059, max=77489, avg=9468.22, stdev=5391.89 00:25:56.600 clat (usec): min=356, max=41595, avg=16215.56, stdev=19740.61 00:25:56.600 lat (usec): min=363, max=41605, avg=16225.03, stdev=19740.48 00:25:56.600 clat percentiles (usec): 00:25:56.600 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 388], 20.00th=[ 404], 00:25:56.600 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[ 461], 60.00th=[ 717], 00:25:56.600 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:56.600 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:25:56.600 | 99.99th=[41681] 00:25:56.600 bw ( KiB/s): min= 480, max= 1280, per=57.77%, avg=987.20, stdev=217.35, samples=20 00:25:56.600 iops : 
min= 120, max= 320, avg=246.80, stdev=54.34, samples=20 00:25:56.600 lat (usec) : 500=56.84%, 750=3.64%, 1000=0.36% 00:25:56.600 lat (msec) : 2=0.16%, 50=39.00% 00:25:56.600 cpu : usr=97.29%, sys=2.21%, ctx=8, majf=0, minf=0 00:25:56.600 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:56.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.600 issued rwts: total=2472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.600 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:56.600 filename1: (groupid=0, jobs=1): err= 0: pid=107811: Thu Dec 5 12:29:27 2024 00:25:56.600 read: IOPS=181, BW=726KiB/s (743kB/s)(7264KiB/10006msec) 00:25:56.600 slat (nsec): min=6102, max=44188, avg=8819.93, stdev=4722.59 00:25:56.600 clat (usec): min=359, max=42430, avg=22010.37, stdev=20191.98 00:25:56.600 lat (usec): min=365, max=42440, avg=22019.19, stdev=20192.06 00:25:56.600 clat percentiles (usec): 00:25:56.600 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 412], 00:25:56.600 | 30.00th=[ 437], 40.00th=[ 474], 50.00th=[40633], 60.00th=[40633], 00:25:56.600 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:56.600 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:25:56.600 | 99.99th=[42206] 00:25:56.600 bw ( KiB/s): min= 576, max= 1024, per=42.67%, avg=729.26, stdev=128.26, samples=19 00:25:56.600 iops : min= 144, max= 256, avg=182.32, stdev=32.06, samples=19 00:25:56.600 lat (usec) : 500=42.40%, 750=3.47%, 1000=0.61% 00:25:56.600 lat (msec) : 2=0.22%, 50=53.30% 00:25:56.600 cpu : usr=97.14%, sys=2.44%, ctx=22, majf=0, minf=0 00:25:56.600 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:56.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.600 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.600 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:56.600 00:25:56.600 Run status group 0 (all jobs): 00:25:56.600 READ: bw=1709KiB/s (1750kB/s), 726KiB/s-985KiB/s (743kB/s-1009kB/s), io=16.8MiB (17.6MB), run=10006-10039msec 00:25:56.600 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:56.600 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:56.600 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:56.600 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.601 12:29:27 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:56.601 ************************************ 00:25:56.601 END TEST fio_dif_1_multi_subsystems 00:25:56.601 ************************************ 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.601 00:25:56.601 real 0m11.278s 00:25:56.601 user 0m20.354s 00:25:56.601 sys 0m0.775s 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.601 12:29:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:56.601 12:29:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:56.601 12:29:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:56.601 12:29:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.601 12:29:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:56.601 ************************************ 00:25:56.601 START TEST fio_dif_rand_params 00:25:56.601 ************************************ 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.601 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:56.859 bdev_null0 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:56.859 [2024-12-05 12:29:27.494397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.859 { 00:25:56.859 "params": { 00:25:56.859 "name": "Nvme$subsystem", 00:25:56.859 "trtype": "$TEST_TRANSPORT", 00:25:56.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.859 "adrfam": "ipv4", 00:25:56.859 "trsvcid": "$NVMF_PORT", 00:25:56.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.859 "hdgst": ${hdgst:-false}, 00:25:56.859 
"ddgst": ${ddgst:-false} 00:25:56.859 }, 00:25:56.859 "method": "bdev_nvme_attach_controller" 00:25:56.859 } 00:25:56.859 EOF 00:25:56.859 )") 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:56.859 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:56.860 "params": { 00:25:56.860 "name": "Nvme0", 00:25:56.860 "trtype": "tcp", 00:25:56.860 "traddr": "10.0.0.3", 00:25:56.860 "adrfam": "ipv4", 00:25:56.860 "trsvcid": "4420", 00:25:56.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:56.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:56.860 "hdgst": false, 00:25:56.860 "ddgst": false 00:25:56.860 }, 00:25:56.860 "method": "bdev_nvme_attach_controller" 00:25:56.860 }' 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:56.860 12:29:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.118 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:57.118 ... 
00:25:57.118 fio-3.35 00:25:57.118 Starting 3 threads 00:26:03.681 00:26:03.681 filename0: (groupid=0, jobs=1): err= 0: pid=107968: Thu Dec 5 12:29:33 2024 00:26:03.681 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(132MiB/5001msec) 00:26:03.681 slat (nsec): min=6113, max=68596, avg=13700.82, stdev=6575.91 00:26:03.681 clat (usec): min=5312, max=77001, avg=14160.38, stdev=13802.12 00:26:03.681 lat (usec): min=5323, max=77023, avg=14174.08, stdev=13802.51 00:26:03.681 clat percentiles (usec): 00:26:03.681 | 1.00th=[ 5407], 5.00th=[ 6128], 10.00th=[ 6456], 20.00th=[ 6783], 00:26:03.681 | 30.00th=[ 7111], 40.00th=[ 8586], 50.00th=[10159], 60.00th=[10683], 00:26:03.681 | 70.00th=[11207], 80.00th=[11863], 90.00th=[47449], 95.00th=[50594], 00:26:03.681 | 99.00th=[52167], 99.50th=[52167], 99.90th=[77071], 99.95th=[77071], 00:26:03.681 | 99.99th=[77071] 00:26:03.681 bw ( KiB/s): min=21248, max=36608, per=26.20%, avg=26851.56, stdev=4416.36, samples=9 00:26:03.681 iops : min= 166, max= 286, avg=209.78, stdev=34.50, samples=9 00:26:03.681 lat (msec) : 10=48.58%, 20=38.85%, 50=6.14%, 100=6.43% 00:26:03.681 cpu : usr=94.88%, sys=3.90%, ctx=10, majf=0, minf=0 00:26:03.681 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.681 issued rwts: total=1058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.681 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:03.681 filename0: (groupid=0, jobs=1): err= 0: pid=107969: Thu Dec 5 12:29:33 2024 00:26:03.681 read: IOPS=358, BW=44.8MiB/s (46.9MB/s)(224MiB/5001msec) 00:26:03.681 slat (nsec): min=6052, max=86707, avg=10777.92, stdev=6224.19 00:26:03.681 clat (usec): min=3406, max=52598, avg=8353.47, stdev=4641.49 00:26:03.681 lat (usec): min=3415, max=52617, avg=8364.25, stdev=4642.10 00:26:03.681 clat percentiles (usec): 00:26:03.681 | 1.00th=[ 3589], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 4047], 00:26:03.681 | 30.00th=[ 7046], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 8291], 00:26:03.681 | 70.00th=[ 9765], 80.00th=[11600], 90.00th=[12387], 95.00th=[13042], 00:26:03.681 | 99.00th=[16909], 99.50th=[48497], 99.90th=[52691], 99.95th=[52691], 00:26:03.681 | 99.99th=[52691] 00:26:03.681 bw ( KiB/s): min=33792, max=54528, per=44.71%, avg=45824.00, stdev=6464.95, samples=9 00:26:03.681 iops : min= 264, max= 426, avg=358.00, stdev=50.51, samples=9 00:26:03.681 lat (msec) : 4=19.10%, 10=52.15%, 20=27.92%, 50=0.45%, 100=0.39% 00:26:03.681 cpu : usr=92.50%, sys=5.38%, ctx=19, majf=0, minf=0 00:26:03.681 IO depths : 1=24.6%, 2=75.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.681 issued rwts: total=1791,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.681 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:03.681 filename0: (groupid=0, jobs=1): err= 0: pid=107970: Thu Dec 5 12:29:33 2024 00:26:03.681 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(145MiB/5002msec) 00:26:03.681 slat (nsec): min=6332, max=69200, avg=14565.91, stdev=6715.14 00:26:03.681 clat (usec): min=2946, max=52617, avg=12958.05, stdev=13011.59 00:26:03.681 lat (usec): min=2958, max=52625, avg=12972.62, stdev=13011.50 00:26:03.681 clat percentiles (usec): 00:26:03.681 | 1.00th=[ 5473], 5.00th=[ 6259], 10.00th=[ 6652], 
20.00th=[ 6915], 00:26:03.681 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 8717], 60.00th=[ 9110], 00:26:03.681 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[46924], 95.00th=[48497], 00:26:03.681 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:26:03.681 | 99.99th=[52691] 00:26:03.681 bw ( KiB/s): min=24576, max=37376, per=29.25%, avg=29980.44, stdev=3810.97, samples=9 00:26:03.681 iops : min= 192, max= 292, avg=234.22, stdev=29.77, samples=9 00:26:03.681 lat (msec) : 4=0.26%, 10=76.64%, 20=11.42%, 50=9.17%, 100=2.51% 00:26:03.681 cpu : usr=92.52%, sys=5.52%, ctx=11, majf=0, minf=0 00:26:03.681 IO depths : 1=4.0%, 2=96.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:03.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.681 issued rwts: total=1156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.681 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:03.681 00:26:03.681 Run status group 0 (all jobs): 00:26:03.681 READ: bw=100MiB/s (105MB/s), 26.4MiB/s-44.8MiB/s (27.7MB/s-46.9MB/s), io=501MiB (525MB), run=5001-5002msec 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:03.681 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 bdev_null0 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 [2024-12-05 12:29:33.581415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 bdev_null1 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 bdev_null2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:03.682 { 00:26:03.682 "params": { 00:26:03.682 "name": "Nvme$subsystem", 00:26:03.682 "trtype": "$TEST_TRANSPORT", 00:26:03.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.682 "adrfam": "ipv4", 
00:26:03.682 "trsvcid": "$NVMF_PORT", 00:26:03.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.682 "hdgst": ${hdgst:-false}, 00:26:03.682 "ddgst": ${ddgst:-false} 00:26:03.682 }, 00:26:03.682 "method": "bdev_nvme_attach_controller" 00:26:03.682 } 00:26:03.682 EOF 00:26:03.682 )") 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:03.682 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:03.683 { 00:26:03.683 "params": { 00:26:03.683 "name": "Nvme$subsystem", 00:26:03.683 "trtype": "$TEST_TRANSPORT", 00:26:03.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.683 "adrfam": "ipv4", 00:26:03.683 "trsvcid": "$NVMF_PORT", 00:26:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.683 "hdgst": ${hdgst:-false}, 00:26:03.683 "ddgst": ${ddgst:-false} 00:26:03.683 }, 00:26:03.683 "method": "bdev_nvme_attach_controller" 00:26:03.683 } 00:26:03.683 EOF 00:26:03.683 )") 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:03.683 { 00:26:03.683 "params": { 00:26:03.683 "name": "Nvme$subsystem", 00:26:03.683 "trtype": "$TEST_TRANSPORT", 00:26:03.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:03.683 "adrfam": "ipv4", 00:26:03.683 "trsvcid": "$NVMF_PORT", 00:26:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:03.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:03.683 "hdgst": ${hdgst:-false}, 00:26:03.683 "ddgst": ${ddgst:-false} 00:26:03.683 }, 00:26:03.683 "method": "bdev_nvme_attach_controller" 00:26:03.683 } 00:26:03.683 EOF 00:26:03.683 )") 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:03.683 "params": { 00:26:03.683 "name": "Nvme0", 00:26:03.683 "trtype": "tcp", 00:26:03.683 "traddr": "10.0.0.3", 00:26:03.683 "adrfam": "ipv4", 00:26:03.683 "trsvcid": "4420", 00:26:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:03.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:03.683 "hdgst": false, 00:26:03.683 "ddgst": false 00:26:03.683 }, 00:26:03.683 "method": "bdev_nvme_attach_controller" 00:26:03.683 },{ 00:26:03.683 "params": { 00:26:03.683 "name": "Nvme1", 00:26:03.683 "trtype": "tcp", 00:26:03.683 "traddr": "10.0.0.3", 00:26:03.683 "adrfam": "ipv4", 00:26:03.683 "trsvcid": "4420", 00:26:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:03.683 "hdgst": false, 00:26:03.683 "ddgst": false 00:26:03.683 }, 00:26:03.683 "method": "bdev_nvme_attach_controller" 00:26:03.683 },{ 00:26:03.683 "params": { 00:26:03.683 "name": "Nvme2", 00:26:03.683 "trtype": "tcp", 00:26:03.683 "traddr": "10.0.0.3", 00:26:03.683 "adrfam": "ipv4", 00:26:03.683 "trsvcid": "4420", 00:26:03.683 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:03.683 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:03.683 "hdgst": false, 00:26:03.683 "ddgst": false 00:26:03.683 }, 00:26:03.683 "method": "bdev_nvme_attach_controller" 00:26:03.683 }' 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 
00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:03.683 12:29:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:03.683 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:03.683 ... 00:26:03.683 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:03.683 ... 00:26:03.683 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:03.683 ... 00:26:03.683 fio-3.35 00:26:03.683 Starting 24 threads 00:26:15.883 00:26:15.883 filename0: (groupid=0, jobs=1): err= 0: pid=108066: Thu Dec 5 12:29:44 2024 00:26:15.883 read: IOPS=294, BW=1178KiB/s (1206kB/s)(11.5MiB/10038msec) 00:26:15.883 slat (usec): min=3, max=8045, avg=20.71, stdev=266.08 00:26:15.883 clat (msec): min=2, max=116, avg=54.11, stdev=21.25 00:26:15.883 lat (msec): min=2, max=116, avg=54.13, stdev=21.26 00:26:15.883 clat percentiles (msec): 00:26:15.883 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 28], 20.00th=[ 40], 00:26:15.883 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 61], 00:26:15.883 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 87], 00:26:15.883 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 116], 99.95th=[ 116], 00:26:15.883 | 99.99th=[ 116] 00:26:15.883 bw ( KiB/s): min= 896, max= 2712, per=4.85%, avg=1179.50, stdev=376.81, samples=20 00:26:15.883 iops : min= 224, max= 678, avg=294.80, stdev=94.21, samples=20 00:26:15.883 lat (msec) : 4=1.08%, 10=3.72%, 20=2.81%, 50=35.79%, 100=55.04% 00:26:15.883 lat (msec) : 250=1.56% 00:26:15.883 cpu : usr=37.51%, sys=0.64%, ctx=1235, majf=0, minf=0 00:26:15.883 IO depths : 1=0.7%, 2=1.8%, 4=8.3%, 8=76.4%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:15.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.883 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.883 issued rwts: total=2956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.883 filename0: (groupid=0, jobs=1): err= 0: pid=108067: Thu Dec 5 12:29:44 2024 00:26:15.883 read: IOPS=282, BW=1130KiB/s (1157kB/s)(11.1MiB/10058msec) 00:26:15.883 slat (usec): min=3, max=8031, avg=20.21, stdev=220.58 00:26:15.883 clat (msec): min=24, max=124, avg=56.49, stdev=17.37 00:26:15.883 lat (msec): min=24, max=124, avg=56.51, stdev=17.37 00:26:15.883 clat percentiles (msec): 00:26:15.883 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 41], 00:26:15.883 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 60], 00:26:15.883 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 81], 95.00th=[ 89], 00:26:15.883 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 126], 99.95th=[ 126], 00:26:15.883 | 99.99th=[ 126] 00:26:15.883 bw ( KiB/s): min= 768, max= 1392, per=4.64%, avg=1129.65, stdev=158.56, samples=20 00:26:15.883 iops : min= 192, max= 348, avg=282.40, stdev=39.65, samples=20 00:26:15.883 lat (msec) : 50=44.24%, 100=53.89%, 250=1.87% 00:26:15.883 cpu : usr=41.03%, sys=0.71%, ctx=1098, majf=0, minf=9 00:26:15.883 IO depths : 1=0.8%, 2=1.8%, 4=8.3%, 8=76.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:15.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.883 complete : 0=0.0%, 4=89.5%, 8=5.8%, 
16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.883 issued rwts: total=2841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.883 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.883 filename0: (groupid=0, jobs=1): err= 0: pid=108068: Thu Dec 5 12:29:44 2024 00:26:15.883 read: IOPS=230, BW=922KiB/s (944kB/s)(9228KiB/10012msec) 00:26:15.883 slat (usec): min=5, max=8040, avg=32.60, stdev=387.22 00:26:15.883 clat (msec): min=23, max=169, avg=69.24, stdev=19.13 00:26:15.883 lat (msec): min=23, max=169, avg=69.27, stdev=19.13 00:26:15.883 clat percentiles (msec): 00:26:15.883 | 1.00th=[ 25], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 58], 00:26:15.883 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 71], 00:26:15.884 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 101], 00:26:15.884 | 99.00th=[ 128], 99.50th=[ 134], 99.90th=[ 169], 99.95th=[ 169], 00:26:15.884 | 99.99th=[ 169] 00:26:15.884 bw ( KiB/s): min= 640, max= 1024, per=3.77%, avg=917.47, stdev=110.15, samples=19 00:26:15.884 iops : min= 160, max= 256, avg=229.37, stdev=27.54, samples=19 00:26:15.884 lat (msec) : 50=12.14%, 100=82.88%, 250=4.98% 00:26:15.884 cpu : usr=32.69%, sys=0.55%, ctx=903, majf=0, minf=9 00:26:15.884 IO depths : 1=1.8%, 2=4.1%, 4=13.2%, 8=69.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:15.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.884 filename0: (groupid=0, jobs=1): err= 0: pid=108069: Thu Dec 5 12:29:44 2024 00:26:15.884 read: IOPS=231, BW=925KiB/s (947kB/s)(9272KiB/10029msec) 00:26:15.884 slat (usec): min=4, max=12027, avg=28.05, stdev=364.01 00:26:15.884 clat (msec): min=27, max=160, avg=69.04, stdev=18.59 00:26:15.884 lat (msec): min=27, max=160, avg=69.07, stdev=18.59 00:26:15.884 clat percentiles (msec): 00:26:15.884 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 57], 00:26:15.884 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 71], 00:26:15.884 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 100], 00:26:15.884 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 161], 99.95th=[ 161], 00:26:15.884 | 99.99th=[ 161] 00:26:15.884 bw ( KiB/s): min= 560, max= 1120, per=3.79%, avg=922.11, stdev=134.71, samples=19 00:26:15.884 iops : min= 140, max= 280, avg=230.53, stdev=33.68, samples=19 00:26:15.884 lat (msec) : 50=12.47%, 100=82.92%, 250=4.62% 00:26:15.884 cpu : usr=42.20%, sys=0.65%, ctx=1248, majf=0, minf=9 00:26:15.884 IO depths : 1=2.9%, 2=6.3%, 4=16.2%, 8=64.4%, 16=10.3%, 32=0.0%, >=64=0.0% 00:26:15.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 complete : 0=0.0%, 4=91.6%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.884 filename0: (groupid=0, jobs=1): err= 0: pid=108070: Thu Dec 5 12:29:44 2024 00:26:15.884 read: IOPS=263, BW=1055KiB/s (1081kB/s)(10.4MiB/10048msec) 00:26:15.884 slat (usec): min=5, max=7052, avg=15.19, stdev=136.97 00:26:15.884 clat (msec): min=23, max=114, avg=60.44, stdev=16.58 00:26:15.884 lat (msec): min=23, max=114, avg=60.45, stdev=16.58 00:26:15.884 clat percentiles (msec): 00:26:15.884 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 46], 00:26:15.884 | 30.00th=[ 51], 40.00th=[ 56], 
50.00th=[ 59], 60.00th=[ 64], 00:26:15.884 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 94], 00:26:15.884 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 115], 99.95th=[ 115], 00:26:15.884 | 99.99th=[ 115] 00:26:15.884 bw ( KiB/s): min= 768, max= 1280, per=4.34%, avg=1056.65, stdev=134.30, samples=20 00:26:15.884 iops : min= 192, max= 320, avg=264.15, stdev=33.59, samples=20 00:26:15.884 lat (msec) : 50=30.86%, 100=67.52%, 250=1.62% 00:26:15.884 cpu : usr=41.61%, sys=0.61%, ctx=1259, majf=0, minf=9 00:26:15.884 IO depths : 1=1.2%, 2=2.8%, 4=10.9%, 8=72.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:15.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 complete : 0=0.0%, 4=90.4%, 8=5.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 issued rwts: total=2651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.884 filename0: (groupid=0, jobs=1): err= 0: pid=108071: Thu Dec 5 12:29:44 2024 00:26:15.884 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10041msec) 00:26:15.884 slat (usec): min=6, max=8032, avg=18.69, stdev=226.37 00:26:15.884 clat (msec): min=6, max=133, avg=56.77, stdev=20.56 00:26:15.884 lat (msec): min=6, max=133, avg=56.78, stdev=20.56 00:26:15.884 clat percentiles (msec): 00:26:15.884 | 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 35], 20.00th=[ 41], 00:26:15.884 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 61], 00:26:15.884 | 70.00th=[ 67], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 94], 00:26:15.884 | 99.00th=[ 118], 99.50th=[ 130], 99.90th=[ 133], 99.95th=[ 133], 00:26:15.884 | 99.99th=[ 133] 00:26:15.884 bw ( KiB/s): min= 688, max= 2152, per=4.61%, avg=1122.65, stdev=282.12, samples=20 00:26:15.884 iops : min= 172, max= 538, avg=280.65, stdev=70.53, samples=20 00:26:15.884 lat (msec) : 10=2.27%, 20=1.63%, 50=39.18%, 100=53.31%, 250=3.61% 00:26:15.884 cpu : usr=33.66%, sys=0.53%, ctx=929, majf=0, minf=9 00:26:15.884 IO depths : 1=0.9%, 2=2.3%, 4=10.1%, 8=73.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:15.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 complete : 0=0.0%, 4=90.2%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 issued rwts: total=2823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.884 filename0: (groupid=0, jobs=1): err= 0: pid=108072: Thu Dec 5 12:29:44 2024 00:26:15.884 read: IOPS=240, BW=963KiB/s (986kB/s)(9684KiB/10057msec) 00:26:15.884 slat (usec): min=5, max=4048, avg=15.34, stdev=108.07 00:26:15.884 clat (msec): min=15, max=130, avg=66.26, stdev=19.49 00:26:15.884 lat (msec): min=15, max=130, avg=66.28, stdev=19.49 00:26:15.884 clat percentiles (msec): 00:26:15.884 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 49], 00:26:15.884 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 69], 00:26:15.884 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 105], 00:26:15.884 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 131], 00:26:15.884 | 99.99th=[ 131] 00:26:15.884 bw ( KiB/s): min= 712, max= 1200, per=3.95%, avg=962.00, stdev=128.99, samples=20 00:26:15.884 iops : min= 178, max= 300, avg=240.50, stdev=32.25, samples=20 00:26:15.884 lat (msec) : 20=0.66%, 50=19.99%, 100=73.98%, 250=5.37% 00:26:15.884 cpu : usr=39.20%, sys=0.66%, ctx=1189, majf=0, minf=9 00:26:15.884 IO depths : 1=1.8%, 2=4.2%, 4=13.4%, 8=69.6%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:15.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:15.884 complete : 0=0.0%, 4=90.8%, 8=3.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.884 filename0: (groupid=0, jobs=1): err= 0: pid=108073: Thu Dec 5 12:29:44 2024 00:26:15.884 read: IOPS=230, BW=924KiB/s (946kB/s)(9252KiB/10016msec) 00:26:15.884 slat (usec): min=6, max=8015, avg=23.27, stdev=249.52 00:26:15.884 clat (msec): min=32, max=149, avg=69.10, stdev=18.06 00:26:15.884 lat (msec): min=32, max=149, avg=69.12, stdev=18.06 00:26:15.884 clat percentiles (msec): 00:26:15.884 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:26:15.884 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 70], 00:26:15.884 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 90], 95.00th=[ 104], 00:26:15.884 | 99.00th=[ 133], 99.50th=[ 133], 99.90th=[ 150], 99.95th=[ 150], 00:26:15.884 | 99.99th=[ 150] 00:26:15.884 bw ( KiB/s): min= 640, max= 1136, per=3.80%, avg=924.21, stdev=115.76, samples=19 00:26:15.884 iops : min= 160, max= 284, avg=231.05, stdev=28.94, samples=19 00:26:15.884 lat (msec) : 50=10.85%, 100=82.66%, 250=6.49% 00:26:15.884 cpu : usr=39.38%, sys=0.58%, ctx=1146, majf=0, minf=9 00:26:15.884 IO depths : 1=2.7%, 2=6.1%, 4=16.3%, 8=64.6%, 16=10.3%, 32=0.0%, >=64=0.0% 00:26:15.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 complete : 0=0.0%, 4=91.5%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 issued rwts: total=2313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.884 filename1: (groupid=0, jobs=1): err= 0: pid=108074: Thu Dec 5 12:29:44 2024 00:26:15.884 read: IOPS=236, BW=944KiB/s (967kB/s)(9460KiB/10021msec) 00:26:15.884 slat (usec): min=3, max=8091, avg=20.50, stdev=236.87 00:26:15.884 clat (msec): min=26, max=141, avg=67.63, stdev=20.98 00:26:15.884 lat (msec): min=26, max=141, avg=67.65, stdev=20.98 00:26:15.884 clat percentiles (msec): 00:26:15.884 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 42], 20.00th=[ 52], 00:26:15.884 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 69], 00:26:15.884 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 108], 00:26:15.884 | 99.00th=[ 134], 99.50th=[ 134], 99.90th=[ 142], 99.95th=[ 142], 00:26:15.884 | 99.99th=[ 142] 00:26:15.884 bw ( KiB/s): min= 640, max= 1192, per=3.81%, avg=928.42, stdev=131.82, samples=19 00:26:15.884 iops : min= 160, max= 298, avg=232.11, stdev=32.95, samples=19 00:26:15.884 lat (msec) : 50=18.44%, 100=73.74%, 250=7.82% 00:26:15.884 cpu : usr=37.85%, sys=0.70%, ctx=1312, majf=0, minf=9 00:26:15.884 IO depths : 1=1.9%, 2=4.6%, 4=13.7%, 8=68.4%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:15.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 complete : 0=0.0%, 4=91.1%, 8=4.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 issued rwts: total=2365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.884 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.884 filename1: (groupid=0, jobs=1): err= 0: pid=108075: Thu Dec 5 12:29:44 2024 00:26:15.884 read: IOPS=239, BW=958KiB/s (981kB/s)(9584KiB/10008msec) 00:26:15.884 slat (usec): min=4, max=8042, avg=42.92, stdev=476.60 00:26:15.884 clat (msec): min=24, max=144, avg=66.49, stdev=19.14 00:26:15.884 lat (msec): min=24, max=144, avg=66.53, stdev=19.15 00:26:15.884 clat percentiles (msec): 00:26:15.884 | 1.00th=[ 36], 5.00th=[ 39], 
10.00th=[ 47], 20.00th=[ 50], 00:26:15.884 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 70], 00:26:15.884 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 103], 00:26:15.884 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:26:15.884 | 99.99th=[ 144] 00:26:15.884 bw ( KiB/s): min= 512, max= 1152, per=3.91%, avg=952.42, stdev=150.98, samples=19 00:26:15.884 iops : min= 128, max= 288, avg=238.11, stdev=37.74, samples=19 00:26:15.884 lat (msec) : 50=22.29%, 100=72.66%, 250=5.05% 00:26:15.884 cpu : usr=32.92%, sys=0.41%, ctx=919, majf=0, minf=10 00:26:15.884 IO depths : 1=1.8%, 2=4.3%, 4=13.4%, 8=68.9%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:15.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.884 issued rwts: total=2396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.885 filename1: (groupid=0, jobs=1): err= 0: pid=108076: Thu Dec 5 12:29:44 2024 00:26:15.885 read: IOPS=280, BW=1124KiB/s (1151kB/s)(11.0MiB/10067msec) 00:26:15.885 slat (usec): min=5, max=4037, avg=20.12, stdev=184.90 00:26:15.885 clat (msec): min=9, max=136, avg=56.81, stdev=19.55 00:26:15.885 lat (msec): min=9, max=136, avg=56.83, stdev=19.54 00:26:15.885 clat percentiles (msec): 00:26:15.885 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 35], 20.00th=[ 41], 00:26:15.885 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 59], 60.00th=[ 61], 00:26:15.885 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 93], 00:26:15.885 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 136], 99.95th=[ 136], 00:26:15.885 | 99.99th=[ 136] 00:26:15.885 bw ( KiB/s): min= 768, max= 1904, per=4.61%, avg=1124.00, stdev=229.40, samples=20 00:26:15.885 iops : min= 192, max= 476, avg=281.00, stdev=57.35, samples=20 00:26:15.885 lat (msec) : 10=0.32%, 20=3.39%, 50=36.60%, 100=56.97%, 250=2.72% 00:26:15.885 cpu : usr=37.99%, sys=0.71%, ctx=1391, majf=0, minf=9 00:26:15.885 IO depths : 1=1.2%, 2=2.7%, 4=10.0%, 8=73.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:15.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.885 filename1: (groupid=0, jobs=1): err= 0: pid=108077: Thu Dec 5 12:29:44 2024 00:26:15.885 read: IOPS=241, BW=968KiB/s (991kB/s)(9712KiB/10034msec) 00:26:15.885 slat (usec): min=6, max=8028, avg=31.60, stdev=397.81 00:26:15.885 clat (msec): min=23, max=148, avg=65.81, stdev=18.17 00:26:15.885 lat (msec): min=23, max=148, avg=65.84, stdev=18.16 00:26:15.885 clat percentiles (msec): 00:26:15.885 | 1.00th=[ 34], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 49], 00:26:15.885 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:26:15.885 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 97], 00:26:15.885 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 148], 99.95th=[ 148], 00:26:15.885 | 99.99th=[ 148] 00:26:15.885 bw ( KiB/s): min= 768, max= 1152, per=3.96%, avg=964.40, stdev=84.36, samples=20 00:26:15.885 iops : min= 192, max= 288, avg=241.10, stdev=21.09, samples=20 00:26:15.885 lat (msec) : 50=21.79%, 100=74.14%, 250=4.08% 00:26:15.885 cpu : usr=32.61%, sys=0.54%, ctx=1023, majf=0, minf=9 00:26:15.885 IO depths : 1=1.4%, 2=3.3%, 4=11.8%, 8=71.3%, 16=12.3%, 32=0.0%, >=64=0.0% 
00:26:15.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 issued rwts: total=2428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.885 filename1: (groupid=0, jobs=1): err= 0: pid=108078: Thu Dec 5 12:29:44 2024 00:26:15.885 read: IOPS=236, BW=947KiB/s (969kB/s)(9484KiB/10019msec) 00:26:15.885 slat (usec): min=5, max=8022, avg=24.44, stdev=283.95 00:26:15.885 clat (msec): min=29, max=150, avg=67.44, stdev=19.37 00:26:15.885 lat (msec): min=29, max=150, avg=67.46, stdev=19.37 00:26:15.885 clat percentiles (msec): 00:26:15.885 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 54], 00:26:15.885 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 69], 00:26:15.885 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 104], 00:26:15.885 | 99.00th=[ 118], 99.50th=[ 125], 99.90th=[ 150], 99.95th=[ 150], 00:26:15.885 | 99.99th=[ 150] 00:26:15.885 bw ( KiB/s): min= 640, max= 1120, per=3.86%, avg=940.32, stdev=133.97, samples=19 00:26:15.885 iops : min= 160, max= 280, avg=235.05, stdev=33.48, samples=19 00:26:15.885 lat (msec) : 50=18.18%, 100=75.37%, 250=6.45% 00:26:15.885 cpu : usr=33.56%, sys=0.54%, ctx=927, majf=0, minf=9 00:26:15.885 IO depths : 1=1.6%, 2=3.6%, 4=12.1%, 8=70.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:15.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 issued rwts: total=2371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.885 filename1: (groupid=0, jobs=1): err= 0: pid=108079: Thu Dec 5 12:29:44 2024 00:26:15.885 read: IOPS=268, BW=1075KiB/s (1101kB/s)(10.5MiB/10045msec) 00:26:15.885 slat (usec): min=3, max=3989, avg=12.64, stdev=76.88 00:26:15.885 clat (msec): min=5, max=160, avg=59.34, stdev=21.29 00:26:15.885 lat (msec): min=5, max=160, avg=59.35, stdev=21.29 00:26:15.885 clat percentiles (msec): 00:26:15.885 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 43], 00:26:15.885 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 63], 00:26:15.885 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 86], 95.00th=[ 95], 00:26:15.885 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 161], 99.95th=[ 161], 00:26:15.885 | 99.99th=[ 161] 00:26:15.885 bw ( KiB/s): min= 728, max= 1968, per=4.42%, avg=1076.25, stdev=263.86, samples=20 00:26:15.885 iops : min= 182, max= 492, avg=269.05, stdev=65.96, samples=20 00:26:15.885 lat (msec) : 10=1.19%, 20=1.19%, 50=34.27%, 100=59.95%, 250=3.41% 00:26:15.885 cpu : usr=32.60%, sys=0.59%, ctx=923, majf=0, minf=0 00:26:15.885 IO depths : 1=1.1%, 2=2.3%, 4=8.8%, 8=75.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:15.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 complete : 0=0.0%, 4=89.6%, 8=6.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 issued rwts: total=2699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.885 filename1: (groupid=0, jobs=1): err= 0: pid=108080: Thu Dec 5 12:29:44 2024 00:26:15.885 read: IOPS=254, BW=1019KiB/s (1044kB/s)(10.0MiB/10061msec) 00:26:15.885 slat (usec): min=3, max=7027, avg=19.05, stdev=189.01 00:26:15.885 clat (msec): min=20, max=132, avg=62.52, stdev=19.84 00:26:15.885 lat (msec): min=20, max=132, avg=62.54, stdev=19.84 
00:26:15.885 clat percentiles (msec): 00:26:15.885 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 47], 00:26:15.885 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 64], 00:26:15.885 | 70.00th=[ 68], 80.00th=[ 78], 90.00th=[ 90], 95.00th=[ 100], 00:26:15.885 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 133], 99.95th=[ 133], 00:26:15.885 | 99.99th=[ 133] 00:26:15.885 bw ( KiB/s): min= 768, max= 1248, per=4.19%, avg=1019.10, stdev=153.61, samples=20 00:26:15.885 iops : min= 192, max= 312, avg=254.75, stdev=38.39, samples=20 00:26:15.885 lat (msec) : 50=26.40%, 100=69.23%, 250=4.37% 00:26:15.885 cpu : usr=43.41%, sys=0.68%, ctx=1225, majf=0, minf=9 00:26:15.885 IO depths : 1=1.6%, 2=3.7%, 4=11.2%, 8=71.9%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:15.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.885 filename1: (groupid=0, jobs=1): err= 0: pid=108081: Thu Dec 5 12:29:44 2024 00:26:15.885 read: IOPS=235, BW=943KiB/s (965kB/s)(9444KiB/10020msec) 00:26:15.885 slat (usec): min=3, max=4030, avg=19.88, stdev=165.16 00:26:15.885 clat (msec): min=25, max=132, avg=67.74, stdev=18.09 00:26:15.885 lat (msec): min=25, max=132, avg=67.76, stdev=18.09 00:26:15.885 clat percentiles (msec): 00:26:15.885 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 55], 00:26:15.885 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 70], 00:26:15.885 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 94], 95.00th=[ 105], 00:26:15.885 | 99.00th=[ 127], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:26:15.885 | 99.99th=[ 132] 00:26:15.885 bw ( KiB/s): min= 640, max= 1120, per=3.86%, avg=940.21, stdev=117.22, samples=19 00:26:15.885 iops : min= 160, max= 280, avg=235.05, stdev=29.31, samples=19 00:26:15.885 lat (msec) : 50=12.54%, 100=80.69%, 250=6.78% 00:26:15.885 cpu : usr=41.83%, sys=0.66%, ctx=1156, majf=0, minf=9 00:26:15.885 IO depths : 1=2.8%, 2=6.4%, 4=17.1%, 8=63.5%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:15.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 complete : 0=0.0%, 4=92.1%, 8=2.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 issued rwts: total=2361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.885 filename2: (groupid=0, jobs=1): err= 0: pid=108082: Thu Dec 5 12:29:44 2024 00:26:15.885 read: IOPS=246, BW=986KiB/s (1009kB/s)(9868KiB/10013msec) 00:26:15.885 slat (usec): min=5, max=8057, avg=25.77, stdev=322.31 00:26:15.885 clat (msec): min=10, max=135, avg=64.79, stdev=20.60 00:26:15.885 lat (msec): min=10, max=135, avg=64.81, stdev=20.61 00:26:15.885 clat percentiles (msec): 00:26:15.885 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 48], 00:26:15.885 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 70], 00:26:15.885 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 96], 00:26:15.885 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:26:15.885 | 99.99th=[ 136] 00:26:15.885 bw ( KiB/s): min= 640, max= 1650, per=4.02%, avg=979.70, stdev=204.17, samples=20 00:26:15.885 iops : min= 160, max= 412, avg=244.90, stdev=50.95, samples=20 00:26:15.885 lat (msec) : 20=1.30%, 50=23.10%, 100=71.46%, 250=4.13% 00:26:15.885 cpu : usr=32.97%, sys=0.48%, ctx=891, majf=0, minf=9 00:26:15.885 IO 
depths : 1=1.1%, 2=2.6%, 4=11.1%, 8=72.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:15.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 complete : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.885 issued rwts: total=2467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.885 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.885 filename2: (groupid=0, jobs=1): err= 0: pid=108083: Thu Dec 5 12:29:44 2024 00:26:15.885 read: IOPS=232, BW=928KiB/s (951kB/s)(9300KiB/10017msec) 00:26:15.885 slat (usec): min=6, max=8059, avg=24.60, stdev=294.92 00:26:15.885 clat (msec): min=22, max=137, avg=68.76, stdev=19.70 00:26:15.885 lat (msec): min=22, max=137, avg=68.78, stdev=19.69 00:26:15.885 clat percentiles (msec): 00:26:15.885 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 55], 00:26:15.885 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 71], 00:26:15.885 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 107], 00:26:15.885 | 99.00th=[ 117], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 138], 00:26:15.885 | 99.99th=[ 138] 00:26:15.885 bw ( KiB/s): min= 688, max= 1200, per=3.76%, avg=914.11, stdev=119.13, samples=19 00:26:15.885 iops : min= 172, max= 300, avg=228.53, stdev=29.78, samples=19 00:26:15.885 lat (msec) : 50=17.76%, 100=74.49%, 250=7.74% 00:26:15.886 cpu : usr=32.67%, sys=0.53%, ctx=1006, majf=0, minf=9 00:26:15.886 IO depths : 1=1.4%, 2=3.3%, 4=11.4%, 8=72.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:15.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.886 filename2: (groupid=0, jobs=1): err= 0: pid=108084: Thu Dec 5 12:29:44 2024 00:26:15.886 read: IOPS=280, BW=1122KiB/s (1149kB/s)(11.0MiB/10033msec) 00:26:15.886 slat (usec): min=6, max=5017, avg=17.55, stdev=154.09 00:26:15.886 clat (msec): min=9, max=151, avg=56.86, stdev=19.07 00:26:15.886 lat (msec): min=9, max=151, avg=56.88, stdev=19.08 00:26:15.886 clat percentiles (msec): 00:26:15.886 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 41], 00:26:15.886 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 62], 00:26:15.886 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 90], 00:26:15.886 | 99.00th=[ 107], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 153], 00:26:15.886 | 99.99th=[ 153] 00:26:15.886 bw ( KiB/s): min= 720, max= 1698, per=4.61%, avg=1122.50, stdev=218.44, samples=20 00:26:15.886 iops : min= 180, max= 424, avg=280.60, stdev=54.54, samples=20 00:26:15.886 lat (msec) : 10=0.36%, 20=2.03%, 50=38.27%, 100=56.86%, 250=2.49% 00:26:15.886 cpu : usr=44.01%, sys=0.62%, ctx=1460, majf=0, minf=9 00:26:15.886 IO depths : 1=0.9%, 2=2.1%, 4=9.5%, 8=75.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:15.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 complete : 0=0.0%, 4=89.8%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 issued rwts: total=2814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.886 filename2: (groupid=0, jobs=1): err= 0: pid=108085: Thu Dec 5 12:29:44 2024 00:26:15.886 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.80MiB/10048msec) 00:26:15.886 slat (usec): min=5, max=4031, avg=17.41, stdev=130.93 00:26:15.886 clat (msec): min=26, max=140, avg=63.97, 
stdev=17.55 00:26:15.886 lat (msec): min=26, max=140, avg=63.99, stdev=17.55 00:26:15.886 clat percentiles (msec): 00:26:15.886 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 49], 00:26:15.886 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 66], 00:26:15.886 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 97], 00:26:15.886 | 99.00th=[ 113], 99.50th=[ 128], 99.90th=[ 140], 99.95th=[ 140], 00:26:15.886 | 99.99th=[ 140] 00:26:15.886 bw ( KiB/s): min= 768, max= 1232, per=4.10%, avg=997.20, stdev=134.60, samples=20 00:26:15.886 iops : min= 192, max= 308, avg=249.30, stdev=33.65, samples=20 00:26:15.886 lat (msec) : 50=21.40%, 100=75.49%, 250=3.11% 00:26:15.886 cpu : usr=46.54%, sys=0.70%, ctx=1415, majf=0, minf=9 00:26:15.886 IO depths : 1=1.4%, 2=3.2%, 4=11.2%, 8=71.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:15.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 issued rwts: total=2509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.886 filename2: (groupid=0, jobs=1): err= 0: pid=108086: Thu Dec 5 12:29:44 2024 00:26:15.886 read: IOPS=247, BW=991KiB/s (1015kB/s)(9968KiB/10054msec) 00:26:15.886 slat (usec): min=3, max=8040, avg=21.94, stdev=278.28 00:26:15.886 clat (msec): min=22, max=136, avg=64.33, stdev=18.82 00:26:15.886 lat (msec): min=22, max=136, avg=64.35, stdev=18.82 00:26:15.886 clat percentiles (msec): 00:26:15.886 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 42], 20.00th=[ 48], 00:26:15.886 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 67], 00:26:15.886 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 99], 00:26:15.886 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 130], 99.95th=[ 130], 00:26:15.886 | 99.99th=[ 138] 00:26:15.886 bw ( KiB/s): min= 768, max= 1216, per=4.07%, avg=990.50, stdev=134.44, samples=20 00:26:15.886 iops : min= 192, max= 304, avg=247.60, stdev=33.58, samples=20 00:26:15.886 lat (msec) : 50=25.92%, 100=69.74%, 250=4.33% 00:26:15.886 cpu : usr=32.67%, sys=0.57%, ctx=1004, majf=0, minf=9 00:26:15.886 IO depths : 1=1.0%, 2=2.4%, 4=10.2%, 8=74.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:15.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 issued rwts: total=2492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.886 filename2: (groupid=0, jobs=1): err= 0: pid=108087: Thu Dec 5 12:29:44 2024 00:26:15.886 read: IOPS=278, BW=1115KiB/s (1142kB/s)(11.0MiB/10055msec) 00:26:15.886 slat (usec): min=5, max=8021, avg=16.05, stdev=170.32 00:26:15.886 clat (msec): min=15, max=121, avg=57.17, stdev=18.00 00:26:15.886 lat (msec): min=15, max=121, avg=57.19, stdev=18.00 00:26:15.886 clat percentiles (msec): 00:26:15.886 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 43], 00:26:15.886 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 61], 00:26:15.886 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 94], 00:26:15.886 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 122], 99.95th=[ 122], 00:26:15.886 | 99.99th=[ 122] 00:26:15.886 bw ( KiB/s): min= 728, max= 1504, per=4.59%, avg=1116.95, stdev=200.09, samples=20 00:26:15.886 iops : min= 182, max= 376, avg=279.20, stdev=50.07, samples=20 00:26:15.886 lat (msec) : 20=0.57%, 50=43.05%, 100=54.81%, 250=1.57% 
00:26:15.886 cpu : usr=44.19%, sys=0.72%, ctx=1115, majf=0, minf=9 00:26:15.886 IO depths : 1=1.3%, 2=3.1%, 4=11.5%, 8=72.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:15.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 issued rwts: total=2804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.886 filename2: (groupid=0, jobs=1): err= 0: pid=108088: Thu Dec 5 12:29:44 2024 00:26:15.886 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10039msec) 00:26:15.886 slat (usec): min=5, max=4041, avg=16.38, stdev=133.80 00:26:15.886 clat (msec): min=24, max=123, avg=59.07, stdev=17.58 00:26:15.886 lat (msec): min=24, max=123, avg=59.08, stdev=17.58 00:26:15.886 clat percentiles (msec): 00:26:15.886 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 43], 00:26:15.886 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:26:15.886 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 88], 95.00th=[ 93], 00:26:15.886 | 99.00th=[ 106], 99.50th=[ 116], 99.90th=[ 124], 99.95th=[ 124], 00:26:15.886 | 99.99th=[ 124] 00:26:15.886 bw ( KiB/s): min= 768, max= 1328, per=4.43%, avg=1078.25, stdev=137.15, samples=20 00:26:15.886 iops : min= 192, max= 332, avg=269.55, stdev=34.29, samples=20 00:26:15.886 lat (msec) : 50=33.70%, 100=64.97%, 250=1.33% 00:26:15.886 cpu : usr=44.91%, sys=0.65%, ctx=1321, majf=0, minf=9 00:26:15.886 IO depths : 1=2.5%, 2=5.5%, 4=14.6%, 8=66.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:15.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 complete : 0=0.0%, 4=91.2%, 8=3.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 issued rwts: total=2712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.886 filename2: (groupid=0, jobs=1): err= 0: pid=108089: Thu Dec 5 12:29:44 2024 00:26:15.886 read: IOPS=246, BW=985KiB/s (1009kB/s)(9908KiB/10057msec) 00:26:15.886 slat (usec): min=4, max=8017, avg=24.37, stdev=280.29 00:26:15.886 clat (msec): min=15, max=130, avg=64.72, stdev=19.99 00:26:15.886 lat (msec): min=16, max=130, avg=64.75, stdev=19.99 00:26:15.886 clat percentiles (msec): 00:26:15.886 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:26:15.886 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 68], 00:26:15.886 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 95], 95.00th=[ 104], 00:26:15.886 | 99.00th=[ 126], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 131], 00:26:15.886 | 99.99th=[ 131] 00:26:15.886 bw ( KiB/s): min= 552, max= 1216, per=4.05%, avg=985.70, stdev=159.78, samples=20 00:26:15.886 iops : min= 138, max= 304, avg=246.40, stdev=39.95, samples=20 00:26:15.886 lat (msec) : 20=0.08%, 50=25.51%, 100=68.91%, 250=5.49% 00:26:15.886 cpu : usr=39.02%, sys=0.62%, ctx=1139, majf=0, minf=9 00:26:15.886 IO depths : 1=1.7%, 2=4.2%, 4=13.6%, 8=68.9%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:15.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.886 issued rwts: total=2477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.886 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:15.886 00:26:15.886 Run status group 0 (all jobs): 00:26:15.886 READ: bw=23.8MiB/s (24.9MB/s), 922KiB/s-1178KiB/s (944kB/s-1206kB/s), io=239MiB (251MB), run=10008-10067msec 00:26:15.886 12:29:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.886 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
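The destroy_subsystems pass above tears down in dependency order: each NVMe-oF subsystem is deleted first (releasing its namespace and listener), and only then is the null bdev behind it removed. A minimal standalone sketch of the same sequence, assuming scripts/rpc.py from an SPDK checkout stands in for the test's rpc_cmd wrapper:

for sub in 0 1 2; do
    # delete the subsystem first so nothing still references the namespace
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
    # then the backing null bdev can be dropped safely
    scripts/rpc.py bdev_null_delete "bdev_null${sub}"
done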
00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 bdev_null0 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 [2024-12-05 12:29:45.219408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 bdev_null1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:15.887 { 00:26:15.887 "params": { 00:26:15.887 "name": "Nvme$subsystem", 00:26:15.887 "trtype": "$TEST_TRANSPORT", 00:26:15.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.887 "adrfam": "ipv4", 00:26:15.887 "trsvcid": "$NVMF_PORT", 00:26:15.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.887 "hdgst": ${hdgst:-false}, 00:26:15.887 "ddgst": ${ddgst:-false} 00:26:15.887 }, 00:26:15.887 "method": "bdev_nvme_attach_controller" 00:26:15.887 } 00:26:15.887 EOF 00:26:15.887 )") 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 
-- # cat 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:15.887 { 00:26:15.887 "params": { 00:26:15.887 "name": "Nvme$subsystem", 00:26:15.887 "trtype": "$TEST_TRANSPORT", 00:26:15.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.887 "adrfam": "ipv4", 00:26:15.887 "trsvcid": "$NVMF_PORT", 00:26:15.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.887 "hdgst": ${hdgst:-false}, 00:26:15.887 "ddgst": ${ddgst:-false} 00:26:15.887 }, 00:26:15.887 "method": "bdev_nvme_attach_controller" 00:26:15.887 } 00:26:15.887 EOF 00:26:15.887 )") 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
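The ldd/grep/awk lines in this stretch are the harness probing the fio plugin for linked sanitizer runtimes, so that any ASAN library can be preloaded ahead of the plugin itself. A hedged sketch of that probe, with the plugin path taken from the trace and the loop body illustrative:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_preload=
for san in libasan libclang_rt.asan; do
    # third ldd column is the resolved library path; empty when not linked
    lib=$(ldd "$plugin" | grep "$san" | awk '{print $3}')
    [ -n "$lib" ] && asan_preload="$asan_preload $lib"
done
# a sanitizer runtime, if present, must come before the plugin in LD_PRELOAD
export LD_PRELOAD="$asan_preload $plugin"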
00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:26:15.887 12:29:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:15.887 "params": { 00:26:15.887 "name": "Nvme0", 00:26:15.887 "trtype": "tcp", 00:26:15.887 "traddr": "10.0.0.3", 00:26:15.887 "adrfam": "ipv4", 00:26:15.887 "trsvcid": "4420", 00:26:15.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:15.887 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:15.887 "hdgst": false, 00:26:15.887 "ddgst": false 00:26:15.887 }, 00:26:15.887 "method": "bdev_nvme_attach_controller" 00:26:15.887 },{ 00:26:15.887 "params": { 00:26:15.887 "name": "Nvme1", 00:26:15.887 "trtype": "tcp", 00:26:15.887 "traddr": "10.0.0.3", 00:26:15.887 "adrfam": "ipv4", 00:26:15.887 "trsvcid": "4420", 00:26:15.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:15.887 "hdgst": false, 00:26:15.887 "ddgst": false 00:26:15.888 }, 00:26:15.888 "method": "bdev_nvme_attach_controller" 00:26:15.888 }' 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:15.888 12:29:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.888 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:15.888 ... 00:26:15.888 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:15.888 ... 
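For reference, the job shape fio echoes back above (randread with 8k/16k/128k block sizes for read/write/trim, iodepth 8, two jobs across two null-backed namespaces, 5 seconds) can be written as a standalone job file. This is a reconstruction from the parameters set at target/dif.sh@115 earlier in the trace, not the file the test actually generates; thread=1 is required by the spdk_bdev engine, and the Nvme0n1/Nvme1n1 bdev names are assumed from the Nvme0/Nvme1 attach calls:

cat > dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
; the SPDK fio plugin requires fio's thread mode
thread=1
rw=randread
; read,write,trim sizes -- matches (R) 8192B / (W) 16.0KiB / (T) 128KiB above
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
; assumed bdev name exposed by bdev_nvme_attach_controller -b Nvme0
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF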
00:26:15.888 fio-3.35 00:26:15.888 Starting 4 threads 00:26:21.152 00:26:21.152 filename0: (groupid=0, jobs=1): err= 0: pid=108221: Thu Dec 5 12:29:51 2024 00:26:21.152 read: IOPS=2239, BW=17.5MiB/s (18.3MB/s)(87.5MiB/5001msec) 00:26:21.152 slat (usec): min=6, max=104, avg=10.98, stdev= 8.39 00:26:21.152 clat (usec): min=872, max=4621, avg=3515.28, stdev=230.14 00:26:21.152 lat (usec): min=888, max=4652, avg=3526.26, stdev=229.20 00:26:21.152 clat percentiles (usec): 00:26:21.152 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3425], 00:26:21.152 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:26:21.152 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3687], 95.00th=[ 3785], 00:26:21.152 | 99.00th=[ 3982], 99.50th=[ 4080], 99.90th=[ 4424], 99.95th=[ 4555], 00:26:21.152 | 99.99th=[ 4621] 00:26:21.152 bw ( KiB/s): min=17280, max=18688, per=25.12%, avg=17948.44, stdev=382.81, samples=9 00:26:21.152 iops : min= 2160, max= 2336, avg=2243.56, stdev=47.85, samples=9 00:26:21.152 lat (usec) : 1000=0.07% 00:26:21.152 lat (msec) : 2=0.50%, 4=98.62%, 10=0.80% 00:26:21.152 cpu : usr=95.22%, sys=3.32%, ctx=5, majf=0, minf=0 00:26:21.152 IO depths : 1=12.0%, 2=24.9%, 4=50.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.152 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.152 issued rwts: total=11200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:21.152 filename0: (groupid=0, jobs=1): err= 0: pid=108222: Thu Dec 5 12:29:51 2024 00:26:21.152 read: IOPS=2229, BW=17.4MiB/s (18.3MB/s)(87.1MiB/5001msec) 00:26:21.152 slat (nsec): min=3874, max=97792, avg=15898.71, stdev=7585.12 00:26:21.152 clat (usec): min=2603, max=5691, avg=3510.19, stdev=164.31 00:26:21.152 lat (usec): min=2614, max=5704, avg=3526.09, stdev=163.78 00:26:21.152 clat percentiles (usec): 00:26:21.152 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3392], 00:26:21.152 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:26:21.152 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3687], 95.00th=[ 3785], 00:26:21.152 | 99.00th=[ 3982], 99.50th=[ 4146], 99.90th=[ 4621], 99.95th=[ 5604], 00:26:21.152 | 99.99th=[ 5669] 00:26:21.152 bw ( KiB/s): min=17280, max=18176, per=25.00%, avg=17863.11, stdev=272.37, samples=9 00:26:21.152 iops : min= 2160, max= 2272, avg=2232.89, stdev=34.05, samples=9 00:26:21.152 lat (msec) : 4=99.01%, 10=0.99% 00:26:21.152 cpu : usr=95.12%, sys=3.46%, ctx=8, majf=0, minf=0 00:26:21.152 IO depths : 1=12.2%, 2=25.0%, 4=50.0%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.152 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.152 issued rwts: total=11152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:21.152 filename1: (groupid=0, jobs=1): err= 0: pid=108223: Thu Dec 5 12:29:51 2024 00:26:21.152 read: IOPS=2231, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5001msec) 00:26:21.152 slat (nsec): min=4067, max=97927, avg=14332.59, stdev=6462.39 00:26:21.152 clat (usec): min=908, max=5882, avg=3513.16, stdev=183.66 00:26:21.152 lat (usec): min=915, max=5889, avg=3527.50, stdev=183.88 00:26:21.152 clat percentiles (usec): 00:26:21.152 | 1.00th=[ 3228], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:26:21.152 | 
30.00th=[ 3458], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:26:21.152 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3687], 95.00th=[ 3752], 00:26:21.152 | 99.00th=[ 3982], 99.50th=[ 4113], 99.90th=[ 5669], 99.95th=[ 5669], 00:26:21.152 | 99.99th=[ 5800] 00:26:21.152 bw ( KiB/s): min=17280, max=18176, per=25.02%, avg=17877.33, stdev=286.22, samples=9 00:26:21.152 iops : min= 2160, max= 2272, avg=2234.67, stdev=35.78, samples=9 00:26:21.152 lat (usec) : 1000=0.03% 00:26:21.152 lat (msec) : 2=0.04%, 4=99.00%, 10=0.93% 00:26:21.152 cpu : usr=94.88%, sys=3.68%, ctx=46, majf=0, minf=0 00:26:21.152 IO depths : 1=11.7%, 2=24.9%, 4=50.1%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.152 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.152 issued rwts: total=11160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:21.152 filename1: (groupid=0, jobs=1): err= 0: pid=108224: Thu Dec 5 12:29:51 2024 00:26:21.152 read: IOPS=2231, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5001msec) 00:26:21.152 slat (nsec): min=5280, max=88845, avg=14887.64, stdev=7630.27 00:26:21.152 clat (usec): min=943, max=6558, avg=3516.27, stdev=206.86 00:26:21.152 lat (usec): min=950, max=6574, avg=3531.15, stdev=206.42 00:26:21.152 clat percentiles (usec): 00:26:21.152 | 1.00th=[ 3195], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:26:21.152 | 30.00th=[ 3458], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3523], 00:26:21.152 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3687], 95.00th=[ 3785], 00:26:21.153 | 99.00th=[ 4080], 99.50th=[ 4424], 99.90th=[ 5669], 99.95th=[ 6521], 00:26:21.153 | 99.99th=[ 6521] 00:26:21.153 bw ( KiB/s): min=17328, max=18128, per=24.97%, avg=17843.50, stdev=239.83, samples=10 00:26:21.153 iops : min= 2166, max= 2266, avg=2230.40, stdev=29.96, samples=10 00:26:21.153 lat (usec) : 1000=0.03% 00:26:21.153 lat (msec) : 2=0.07%, 4=98.49%, 10=1.42% 00:26:21.153 cpu : usr=94.86%, sys=3.70%, ctx=10, majf=0, minf=0 00:26:21.153 IO depths : 1=9.8%, 2=20.6%, 4=54.4%, 8=15.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.153 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.153 issued rwts: total=11158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.153 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:21.153 00:26:21.153 Run status group 0 (all jobs): 00:26:21.153 READ: bw=69.8MiB/s (73.2MB/s), 17.4MiB/s-17.5MiB/s (18.3MB/s-18.3MB/s), io=349MiB (366MB), run=5001-5001msec 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.153 00:26:21.153 real 0m24.005s 00:26:21.153 user 2m7.897s 00:26:21.153 sys 0m3.795s 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 ************************************ 00:26:21.153 END TEST fio_dif_rand_params 00:26:21.153 ************************************ 00:26:21.153 12:29:51 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:21.153 12:29:51 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:21.153 12:29:51 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 ************************************ 00:26:21.153 START TEST fio_dif_digest 00:26:21.153 ************************************ 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 bdev_null0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:21.153 [2024-12-05 12:29:51.551416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:21.153 { 00:26:21.153 "params": { 00:26:21.153 "name": "Nvme$subsystem", 00:26:21.153 "trtype": "$TEST_TRANSPORT", 00:26:21.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.153 "adrfam": "ipv4", 00:26:21.153 "trsvcid": "$NVMF_PORT", 00:26:21.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.153 "hdgst": ${hdgst:-false}, 00:26:21.153 "ddgst": ${ddgst:-false} 00:26:21.153 }, 00:26:21.153 "method": "bdev_nvme_attach_controller" 00:26:21.153 } 00:26:21.153 EOF 00:26:21.153 )") 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 
-- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
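# --- For reference: the subsystem setup traced above reduces to this direct
# rpc.py sequence (a sketch; rpc_cmd in the autotest harness forwards to
# scripts/rpc.py, "64 512" are size in MiB and block size in bytes, and all
# flags are copied from the trace):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MiB null bdev with 512 B blocks, 16 B metadata, DIF type 3, i.e. the
# end-to-end protection that the hdgst/ddgst fio run below exercises
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
# listen on the target-namespace address used throughout this run
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420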
00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:21.153 "params": { 00:26:21.153 "name": "Nvme0", 00:26:21.153 "trtype": "tcp", 00:26:21.153 "traddr": "10.0.0.3", 00:26:21.153 "adrfam": "ipv4", 00:26:21.153 "trsvcid": "4420", 00:26:21.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:21.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:21.153 "hdgst": true, 00:26:21.153 "ddgst": true 00:26:21.153 }, 00:26:21.153 "method": "bdev_nvme_attach_controller" 00:26:21.153 }' 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:21.153 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:21.154 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.154 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:21.154 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:21.154 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:21.154 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:21.154 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:21.154 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:21.154 12:29:51 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.154 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:21.154 ... 
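# --- Outside the harness, the invocation above amounts to an ordinary fio job
# file driven through the SPDK bdev plugin (a sketch: the job body is
# reconstructed from the fio banner, rw=randread, bs=128k, iodepth=3, three
# threads; the Nvme0n1 filename assumes the default name of namespace 1 on the
# attached Nvme0 controller; the header/data digests are enabled in the JSON
# attach config printed above, not in the job file):
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF
# bdev.json: the bdev_nvme_attach_controller config assembled by
# gen_nvmf_target_json and printf'd above ("hdgst": true, "ddgst": true)
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --spdk_json_conf=bdev.json digest.fio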
00:26:21.154 fio-3.35 00:26:21.154 Starting 3 threads 00:26:33.364 00:26:33.364 filename0: (groupid=0, jobs=1): err= 0: pid=108326: Thu Dec 5 12:30:02 2024 00:26:33.364 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(261MiB/10004msec) 00:26:33.364 slat (nsec): min=5802, max=85345, avg=19280.03, stdev=6366.98 00:26:33.364 clat (usec): min=5904, max=27560, avg=14371.33, stdev=2309.76 00:26:33.364 lat (usec): min=5923, max=27580, avg=14390.61, stdev=2310.78 00:26:33.364 clat percentiles (usec): 00:26:33.364 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[13829], 00:26:33.364 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:26:33.364 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:26:33.364 | 99.00th=[17957], 99.50th=[20055], 99.90th=[24511], 99.95th=[26870], 00:26:33.364 | 99.99th=[27657] 00:26:33.364 bw ( KiB/s): min=24576, max=30720, per=29.77%, avg=26720.95, stdev=1623.98, samples=19 00:26:33.364 iops : min= 192, max= 240, avg=208.74, stdev=12.71, samples=19 00:26:33.364 lat (msec) : 10=10.65%, 20=88.92%, 50=0.43% 00:26:33.364 cpu : usr=93.90%, sys=4.44%, ctx=41, majf=0, minf=9 00:26:33.364 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:33.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.364 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.364 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:33.364 filename0: (groupid=0, jobs=1): err= 0: pid=108327: Thu Dec 5 12:30:02 2024 00:26:33.364 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(307MiB/10005msec) 00:26:33.364 slat (nsec): min=6374, max=72242, avg=14292.98, stdev=6138.94 00:26:33.364 clat (usec): min=6037, max=54177, avg=12186.46, stdev=2428.52 00:26:33.364 lat (usec): min=6055, max=54187, avg=12200.75, stdev=2428.02 00:26:33.364 clat percentiles (usec): 00:26:33.364 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 8291], 20.00th=[11338], 00:26:33.364 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:26:33.364 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484], 00:26:33.364 | 99.00th=[15664], 99.50th=[16712], 99.90th=[52167], 99.95th=[52691], 00:26:33.364 | 99.99th=[54264] 00:26:33.364 bw ( KiB/s): min=28614, max=35072, per=35.11%, avg=31511.89, stdev=1880.11, samples=19 00:26:33.364 iops : min= 223, max= 274, avg=246.16, stdev=14.74, samples=19 00:26:33.364 lat (msec) : 10=13.99%, 20=85.77%, 50=0.12%, 100=0.12% 00:26:33.364 cpu : usr=93.81%, sys=4.46%, ctx=24, majf=0, minf=9 00:26:33.364 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:33.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.364 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.364 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:33.364 filename0: (groupid=0, jobs=1): err= 0: pid=108328: Thu Dec 5 12:30:02 2024 00:26:33.364 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(309MiB/10008msec) 00:26:33.364 slat (nsec): min=6485, max=77610, avg=14896.63, stdev=5817.33 00:26:33.364 clat (usec): min=6482, max=64292, avg=12121.69, stdev=7017.29 00:26:33.364 lat (usec): min=6492, max=64311, avg=12136.59, stdev=7017.17 00:26:33.364 clat percentiles (usec): 00:26:33.364 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 
00:26:33.364 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:26:33.364 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12780], 00:26:33.364 | 99.00th=[52167], 99.50th=[53216], 99.90th=[56886], 99.95th=[57934], 00:26:33.364 | 99.99th=[64226] 00:26:33.364 bw ( KiB/s): min=22016, max=36608, per=35.27%, avg=31656.26, stdev=3907.64, samples=19 00:26:33.364 iops : min= 172, max= 286, avg=247.26, stdev=30.52, samples=19 00:26:33.364 lat (msec) : 10=12.17%, 20=84.84%, 50=0.16%, 100=2.83% 00:26:33.364 cpu : usr=94.06%, sys=4.25%, ctx=18, majf=0, minf=0 00:26:33.364 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:33.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.364 issued rwts: total=2473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.364 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:33.364 00:26:33.364 Run status group 0 (all jobs): 00:26:33.364 READ: bw=87.6MiB/s (91.9MB/s), 26.1MiB/s-30.9MiB/s (27.3MB/s-32.4MB/s), io=877MiB (920MB), run=10004-10008msec 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.364 ************************************ 00:26:33.364 END TEST fio_dif_digest 00:26:33.364 ************************************ 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.364 00:26:33.364 real 0m11.052s 00:26:33.364 user 0m28.852s 00:26:33.364 sys 0m1.620s 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.364 12:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:33.364 12:30:02 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:33.364 12:30:02 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.364 rmmod nvme_tcp 00:26:33.364 rmmod nvme_fabrics 00:26:33.364 rmmod nvme_keyring 00:26:33.364 12:30:02 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 107572 ']' 00:26:33.364 12:30:02 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 107572 00:26:33.364 12:30:02 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 107572 ']' 00:26:33.364 12:30:02 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 107572 00:26:33.365 12:30:02 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:26:33.365 12:30:02 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.365 12:30:02 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107572 00:26:33.365 killing process with pid 107572 00:26:33.365 12:30:02 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:33.365 12:30:02 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:33.365 12:30:02 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107572' 00:26:33.365 12:30:02 nvmf_dif -- common/autotest_common.sh@973 -- # kill 107572 00:26:33.365 12:30:02 nvmf_dif -- common/autotest_common.sh@978 -- # wait 107572 00:26:33.365 12:30:02 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:26:33.365 12:30:02 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:33.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:33.365 Waiting for block devices as requested 00:26:33.365 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:33.365 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.365 12:30:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:33.365 12:30:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.365 12:30:03 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:26:33.365 00:26:33.365 real 1m0.411s 00:26:33.365 user 3m53.657s 00:26:33.365 sys 0m13.350s 00:26:33.365 ************************************ 00:26:33.365 END TEST nvmf_dif 00:26:33.365 12:30:03 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.365 12:30:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:33.365 ************************************ 00:26:33.365 12:30:03 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:33.365 12:30:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:33.365 12:30:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.365 12:30:03 -- common/autotest_common.sh@10 -- # set +x 00:26:33.365 ************************************ 00:26:33.365 START TEST nvmf_abort_qd_sizes 00:26:33.365 ************************************ 00:26:33.365 12:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:33.365 * Looking for test storage... 00:26:33.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:33.365 12:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:33.365 12:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:26:33.365 12:30:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:33.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.365 --rc genhtml_branch_coverage=1 00:26:33.365 --rc genhtml_function_coverage=1 00:26:33.365 --rc genhtml_legend=1 00:26:33.365 --rc geninfo_all_blocks=1 00:26:33.365 --rc geninfo_unexecuted_blocks=1 00:26:33.365 00:26:33.365 ' 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:33.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.365 --rc genhtml_branch_coverage=1 00:26:33.365 --rc genhtml_function_coverage=1 00:26:33.365 --rc genhtml_legend=1 00:26:33.365 --rc geninfo_all_blocks=1 00:26:33.365 --rc geninfo_unexecuted_blocks=1 00:26:33.365 00:26:33.365 ' 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:33.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.365 --rc genhtml_branch_coverage=1 00:26:33.365 --rc genhtml_function_coverage=1 00:26:33.365 --rc genhtml_legend=1 00:26:33.365 --rc geninfo_all_blocks=1 00:26:33.365 --rc geninfo_unexecuted_blocks=1 00:26:33.365 00:26:33.365 ' 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:33.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.365 --rc genhtml_branch_coverage=1 00:26:33.365 --rc genhtml_function_coverage=1 00:26:33.365 --rc genhtml_legend=1 00:26:33.365 --rc geninfo_all_blocks=1 00:26:33.365 --rc geninfo_unexecuted_blocks=1 00:26:33.365 00:26:33.365 ' 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.365 12:30:04 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.366 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:33.366 Cannot find device "nvmf_init_br" 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:33.366 Cannot find device "nvmf_init_br2" 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:33.366 Cannot find device "nvmf_tgt_br" 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:33.366 Cannot find device "nvmf_tgt_br2" 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:33.366 Cannot find device "nvmf_init_br" 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:33.366 Cannot find device "nvmf_init_br2" 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:33.366 Cannot find device "nvmf_tgt_br" 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:33.366 Cannot find device "nvmf_tgt_br2" 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:26:33.366 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:33.625 Cannot find device "nvmf_br" 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:33.625 Cannot find device "nvmf_init_if" 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:33.625 Cannot find device "nvmf_init_if2" 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:33.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:33.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:33.625 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:33.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:33.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.384 ms 00:26:33.884 00:26:33.884 --- 10.0.0.3 ping statistics --- 00:26:33.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.884 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:26:33.884 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:33.884 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:33.884 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.144 ms 00:26:33.884 00:26:33.884 --- 10.0.0.4 ping statistics --- 00:26:33.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.884 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:26:33.884 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:33.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:33.884 00:26:33.884 --- 10.0.0.1 ping statistics --- 00:26:33.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.884 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:33.884 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:33.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:33.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:26:33.884 00:26:33.884 --- 10.0.0.2 ping statistics --- 00:26:33.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.885 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:33.885 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.885 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:26:33.885 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:26:33.885 12:30:04 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:34.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:34.450 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:34.707 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=108977 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 108977 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 108977 ']' 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.707 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:34.707 [2024-12-05 12:30:05.508903] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
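# --- The "Cannot find device" / "Cannot open network namespace" noise above is
# the expected pre-clean on a fresh runner; nvmf_veth_init then builds the
# topology that the four pings just verified. Condensed to one of the two
# interface pairs (a sketch from the commands traced above; the second pair,
# nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4, is set up the same way):
ip netns add nvmf_tgt_ns_spdk
# veth pairs: one end stays on the host, the *_br ends join the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                 # initiator side
ip netns exec nvmf_tgt_ns_spdk \
    ip addr add 10.0.0.3/24 dev nvmf_tgt_if              # target side
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                       # host -> target ns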
00:26:34.707 [2024-12-05 12:30:05.509200] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.965 [2024-12-05 12:30:05.662617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.965 [2024-12-05 12:30:05.724074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.965 [2024-12-05 12:30:05.724451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.965 [2024-12-05 12:30:05.724697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.965 [2024-12-05 12:30:05.724849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.965 [2024-12-05 12:30:05.725036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:34.965 [2024-12-05 12:30:05.726431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.965 [2024-12-05 12:30:05.726576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.965 [2024-12-05 12:30:05.726674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:34.965 [2024-12-05 12:30:05.726680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:26:35.223 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:26:35.224 12:30:05 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.224 12:30:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:35.224 ************************************ 00:26:35.224 START TEST spdk_target_abort 00:26:35.224 ************************************ 00:26:35.224 12:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:26:35.224 12:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:35.224 12:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:26:35.224 12:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.224 12:30:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.224 spdk_targetn1 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.224 [2024-12-05 12:30:06.048748] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.224 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:35.482 [2024-12-05 12:30:06.093567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.482 12:30:06 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:35.482 12:30:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:38.764 Initializing NVMe Controllers 00:26:38.764 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:26:38.764 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:38.764 Initialization complete. Launching workers. 
00:26:38.764 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10993, failed: 0 00:26:38.764 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1168, failed to submit 9825 00:26:38.764 success 762, unsuccessful 406, failed 0 00:26:38.764 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:38.764 12:30:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:42.048 Initializing NVMe Controllers 00:26:42.048 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:26:42.048 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:42.048 Initialization complete. Launching workers. 00:26:42.048 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5974, failed: 0 00:26:42.048 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1269, failed to submit 4705 00:26:42.048 success 231, unsuccessful 1038, failed 0 00:26:42.048 12:30:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:42.048 12:30:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:45.325 Initializing NVMe Controllers 00:26:45.325 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:26:45.325 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:45.325 Initialization complete. Launching workers. 
00:26:45.325 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30684, failed: 0 00:26:45.325 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2577, failed to submit 28107 00:26:45.325 success 526, unsuccessful 2051, failed 0 00:26:45.325 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:45.325 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.325 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.325 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.325 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:45.325 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.325 12:30:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 108977 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 108977 ']' 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 108977 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108977 00:26:45.890 killing process with pid 108977 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108977' 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 108977 00:26:45.890 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 108977 00:26:46.148 ************************************ 00:26:46.148 00:26:46.148 real 0m10.950s 00:26:46.148 user 0m42.353s 00:26:46.148 sys 0m1.820s 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:46.148 END TEST spdk_target_abort 00:26:46.148 ************************************ 00:26:46.148 12:30:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:46.148 12:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:46.148 12:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:46.148 12:30:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:46.148 ************************************ 00:26:46.148 START TEST kernel_target_abort 00:26:46.148 
************************************ 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:46.148 12:30:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:46.148 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:46.148 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:46.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:46.776 Waiting for block devices as requested 00:26:46.776 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:46.776 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:46.776 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:47.059 No valid GPT data, bailing 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:47.059 No valid GPT data, bailing 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
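Annotation: the scan above walks every /sys/block/nvme* entry looking for a device that is safe to hand to the kernel target; zoned namespaces are skipped, and "No valid GPT data, bailing" from spdk-gpt.py means the disk carries no partition table and can be claimed. A simplified sketch of that selection, substituting plain blkid for the harness's spdk-gpt.py probe (an assumption for brevity; device names follow this run):

    # Pick the last non-zoned, unpartitioned /dev/nvme* device as the backing disk.
    nvme=
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # Skip zoned namespaces, mirroring the is_block_zoned check above.
        if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
            continue
        fi
        # blkid exits non-zero when no PTTYPE tag is found, i.e. the disk is unused.
        if ! blkid -s PTTYPE -o value "/dev/$dev" >/dev/null; then
            nvme=/dev/$dev
        fi
    done
    echo "selected backing device: $nvme"   # /dev/nvme1n1 in this run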
00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:47.059 No valid GPT data, bailing 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:47.059 No valid GPT data, bailing 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:26:47.059 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:47.060 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:26:47.060 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:26:47.060 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:26:47.060 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:47.060 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3 -a 10.0.0.1 -t tcp -s 4420 00:26:47.324 00:26:47.324 Discovery Log Number of Records 2, Generation counter 2 00:26:47.324 =====Discovery Log Entry 0====== 00:26:47.324 trtype: tcp 00:26:47.324 adrfam: ipv4 00:26:47.324 subtype: current discovery subsystem 00:26:47.324 treq: not specified, sq flow control disable supported 00:26:47.324 portid: 1 00:26:47.324 trsvcid: 4420 00:26:47.324 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:47.324 traddr: 10.0.0.1 00:26:47.324 eflags: none 00:26:47.324 sectype: none 00:26:47.324 =====Discovery Log Entry 1====== 00:26:47.324 trtype: tcp 00:26:47.324 adrfam: ipv4 00:26:47.324 subtype: nvme subsystem 00:26:47.324 treq: not specified, sq flow control disable supported 00:26:47.324 portid: 1 00:26:47.324 trsvcid: 4420 00:26:47.324 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:47.324 traddr: 10.0.0.1 00:26:47.324 eflags: none 00:26:47.324 sectype: none 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:47.324 12:30:17 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:47.324 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.325 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:47.325 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:47.325 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:47.325 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:47.325 12:30:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:50.607 Initializing NVMe Controllers 00:26:50.607 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:50.607 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:50.607 Initialization complete. Launching workers. 00:26:50.607 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28322, failed: 0 00:26:50.607 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28322, failed to submit 0 00:26:50.607 success 0, unsuccessful 28322, failed 0 00:26:50.607 12:30:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:50.607 12:30:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:53.885 Initializing NVMe Controllers 00:26:53.885 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:53.885 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:53.885 Initialization complete. Launching workers. 
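Annotation: once the port and subsystem are linked, the harness confirms the export with nvme-cli, which is where the two discovery log records above come from. The same check can be run by hand; host NQN and host ID below are the ones generated earlier in this run:

    # Expect two records: the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn.
    nvme discover -t tcp -a 10.0.0.1 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 \
        --hostid=bd2c7f89-972d-42a2-ab59-8d5d886644e3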
00:26:53.885 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61503, failed: 0 00:26:53.885 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24284, failed to submit 37219 00:26:53.885 success 0, unsuccessful 24284, failed 0 00:26:53.885 12:30:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:53.885 12:30:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:57.162 Initializing NVMe Controllers 00:26:57.162 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:57.162 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:57.162 Initialization complete. Launching workers. 00:26:57.162 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101187, failed: 0 00:26:57.162 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25328, failed to submit 75859 00:26:57.162 success 0, unsuccessful 25328, failed 0 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:57.162 12:30:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:57.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:59.952 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:00.211 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:00.211 00:27:00.211 real 0m13.953s 00:27:00.211 user 0m5.615s 00:27:00.211 sys 0m5.396s 00:27:00.211 12:30:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.211 12:30:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:00.211 ************************************ 00:27:00.211 END TEST kernel_target_abort 00:27:00.211 ************************************ 00:27:00.211 12:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:00.211 12:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:00.211 
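Annotation: setup and teardown of the kernel target interleave with the runs above; condensed, the configure_kernel_target/clean_kernel_target pair is a plain configfs sequence. A sketch assuming root and the standard kernel nvmet attribute names (the harness also writes a serial string, echoed above as SPDK-<nqn>); subsystem, device, and port values are taken from this run:

    # Export /dev/nvme1n1 as nqn.2016-06.io.spdk:testnqn over TCP port 4420.
    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    # Teardown, as clean_kernel_target does above, mirrors setup in reverse.
    echo 0 > "$sub/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet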
12:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.211 12:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:27:00.211 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.211 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:27:00.211 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.211 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.211 rmmod nvme_tcp 00:27:00.470 rmmod nvme_fabrics 00:27:00.470 rmmod nvme_keyring 00:27:00.470 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.470 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:27:00.471 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:27:00.471 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 108977 ']' 00:27:00.471 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 108977 00:27:00.471 12:30:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 108977 ']' 00:27:00.471 12:30:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 108977 00:27:00.471 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (108977) - No such process 00:27:00.471 Process with pid 108977 is not found 00:27:00.471 12:30:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 108977 is not found' 00:27:00.471 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:27:00.471 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:00.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:00.730 Waiting for block devices as requested 00:27:00.730 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:00.989 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:00.989 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:00.989 12:30:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:01.247 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:01.247 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:01.247 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:01.247 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:01.247 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:01.247 12:30:31 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.247 12:30:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:01.247 12:30:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.247 12:30:32 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:27:01.247 00:27:01.247 real 0m28.129s 00:27:01.247 user 0m49.155s 00:27:01.247 sys 0m8.738s 00:27:01.247 12:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.247 12:30:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:01.247 ************************************ 00:27:01.247 END TEST nvmf_abort_qd_sizes 00:27:01.247 ************************************ 00:27:01.247 12:30:32 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:27:01.247 12:30:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:01.247 12:30:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.247 12:30:32 -- common/autotest_common.sh@10 -- # set +x 00:27:01.247 ************************************ 00:27:01.247 START TEST keyring_file 00:27:01.247 ************************************ 00:27:01.247 12:30:32 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:27:01.507 * Looking for test storage... 
00:27:01.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@345 -- # : 1 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@353 -- # local d=1 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@355 -- # echo 1 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@353 -- # local d=2 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@355 -- # echo 2 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@368 -- # return 0 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.507 --rc genhtml_branch_coverage=1 00:27:01.507 --rc genhtml_function_coverage=1 00:27:01.507 --rc genhtml_legend=1 00:27:01.507 --rc geninfo_all_blocks=1 00:27:01.507 --rc geninfo_unexecuted_blocks=1 00:27:01.507 00:27:01.507 ' 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.507 --rc genhtml_branch_coverage=1 00:27:01.507 --rc genhtml_function_coverage=1 00:27:01.507 --rc genhtml_legend=1 00:27:01.507 --rc geninfo_all_blocks=1 00:27:01.507 --rc 
geninfo_unexecuted_blocks=1 00:27:01.507 00:27:01.507 ' 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.507 --rc genhtml_branch_coverage=1 00:27:01.507 --rc genhtml_function_coverage=1 00:27:01.507 --rc genhtml_legend=1 00:27:01.507 --rc geninfo_all_blocks=1 00:27:01.507 --rc geninfo_unexecuted_blocks=1 00:27:01.507 00:27:01.507 ' 00:27:01.507 12:30:32 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.507 --rc genhtml_branch_coverage=1 00:27:01.507 --rc genhtml_function_coverage=1 00:27:01.507 --rc genhtml_legend=1 00:27:01.507 --rc geninfo_all_blocks=1 00:27:01.507 --rc geninfo_unexecuted_blocks=1 00:27:01.507 00:27:01.507 ' 00:27:01.507 12:30:32 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:27:01.507 12:30:32 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.507 12:30:32 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.507 12:30:32 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.507 12:30:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.507 12:30:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.507 12:30:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:01.507 12:30:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@51 -- # : 0 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:01.507 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:01.507 12:30:32 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:01.508 12:30:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:01.508 12:30:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:01.508 12:30:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:01.508 12:30:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:01.508 12:30:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:01.508 12:30:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:01.508 12:30:32 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BySExFZQNL 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BySExFZQNL 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BySExFZQNL 00:27:01.508 12:30:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BySExFZQNL 00:27:01.508 12:30:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hY9JO5UCaL 00:27:01.508 12:30:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:27:01.508 12:30:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:27:01.766 12:30:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hY9JO5UCaL 00:27:01.766 12:30:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hY9JO5UCaL 00:27:01.766 12:30:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.hY9JO5UCaL 00:27:01.766 12:30:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=109886 00:27:01.766 12:30:32 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.766 12:30:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 109886 00:27:01.766 12:30:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109886 ']' 00:27:01.766 12:30:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.766 12:30:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.766 12:30:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:27:01.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.766 12:30:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.766 12:30:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:01.766 [2024-12-05 12:30:32.494009] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:27:01.766 [2024-12-05 12:30:32.494774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109886 ] 00:27:02.024 [2024-12-05 12:30:32.654926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.024 [2024-12-05 12:30:32.721724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:27:02.960 12:30:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:02.960 [2024-12-05 12:30:33.524520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.960 null0 00:27:02.960 [2024-12-05 12:30:33.556492] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:02.960 [2024-12-05 12:30:33.556697] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.960 12:30:33 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:02.960 [2024-12-05 12:30:33.584479] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:02.960 2024/12/05 12:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:27:02.960 request: 00:27:02.960 { 00:27:02.960 "method": "nvmf_subsystem_add_listener", 00:27:02.960 "params": { 00:27:02.960 "nqn": 
"nqn.2016-06.io.spdk:cnode0", 00:27:02.960 "secure_channel": false, 00:27:02.960 "listen_address": { 00:27:02.960 "trtype": "tcp", 00:27:02.960 "traddr": "127.0.0.1", 00:27:02.960 "trsvcid": "4420" 00:27:02.960 } 00:27:02.960 } 00:27:02.960 } 00:27:02.960 Got JSON-RPC error response 00:27:02.960 GoRPCClient: error on JSON-RPC call 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:02.960 12:30:33 keyring_file -- keyring/file.sh@47 -- # bperfpid=109921 00:27:02.960 12:30:33 keyring_file -- keyring/file.sh@49 -- # waitforlisten 109921 /var/tmp/bperf.sock 00:27:02.960 12:30:33 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109921 ']' 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.960 12:30:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:02.960 [2024-12-05 12:30:33.652557] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
00:27:02.960 [2024-12-05 12:30:33.652659] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109921 ] 00:27:02.960 [2024-12-05 12:30:33.802789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.220 [2024-12-05 12:30:33.864227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.787 12:30:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.787 12:30:34 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:27:03.787 12:30:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BySExFZQNL 00:27:03.787 12:30:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BySExFZQNL 00:27:04.045 12:30:34 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.hY9JO5UCaL 00:27:04.045 12:30:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.hY9JO5UCaL 00:27:04.317 12:30:35 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:27:04.317 12:30:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:04.317 12:30:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.317 12:30:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:04.317 12:30:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.575 12:30:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BySExFZQNL == \/\t\m\p\/\t\m\p\.\B\y\S\E\x\F\Z\Q\N\L ]] 00:27:04.575 12:30:35 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:27:04.575 12:30:35 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:27:04.575 12:30:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.575 12:30:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.575 12:30:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:04.833 12:30:35 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.hY9JO5UCaL == \/\t\m\p\/\t\m\p\.\h\Y\9\J\O\5\U\C\a\L ]] 00:27:04.833 12:30:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:27:04.833 12:30:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:04.833 12:30:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:04.833 12:30:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.833 12:30:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:04.833 12:30:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.090 12:30:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:05.090 12:30:35 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:27:05.090 12:30:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:05.090 12:30:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:05.090 12:30:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:05.090 12:30:35 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.090 12:30:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:05.348 12:30:36 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:27:05.348 12:30:36 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:05.348 12:30:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:05.606 [2024-12-05 12:30:36.359738] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:05.606 nvme0n1 00:27:05.607 12:30:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:27:05.607 12:30:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:05.607 12:30:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:05.607 12:30:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:05.607 12:30:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.607 12:30:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:06.173 12:30:36 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:27:06.173 12:30:36 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:27:06.173 12:30:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:06.173 12:30:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:06.173 12:30:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:06.173 12:30:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:06.173 12:30:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:06.173 12:30:36 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:27:06.173 12:30:36 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:06.432 Running I/O for 1 seconds... 
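Annotation: the refcount assertions here are keyring_get_keys piped through jq. A sketch of the attach-and-verify step as the test performs it; the expected refcnt of 2 is the value this run checks once nvme0 holds key0:

    # Attach over TLS with the registered key, then confirm the key is in use.
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    refcnt=$($rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt')
    (( refcnt == 2 ))   # 1 base reference + 1 held by the attached controller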
00:27:07.368 13323.00 IOPS, 52.04 MiB/s 00:27:07.368 Latency(us) 00:27:07.368 [2024-12-05T12:30:38.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.368 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:07.368 nvme0n1 : 1.01 13369.36 52.22 0.00 0.00 9547.52 4438.57 21805.61 00:27:07.368 [2024-12-05T12:30:38.237Z] =================================================================================================================== 00:27:07.368 [2024-12-05T12:30:38.237Z] Total : 13369.36 52.22 0.00 0.00 9547.52 4438.57 21805.61 00:27:07.368 { 00:27:07.368 "results": [ 00:27:07.368 { 00:27:07.368 "job": "nvme0n1", 00:27:07.368 "core_mask": "0x2", 00:27:07.368 "workload": "randrw", 00:27:07.368 "percentage": 50, 00:27:07.368 "status": "finished", 00:27:07.368 "queue_depth": 128, 00:27:07.368 "io_size": 4096, 00:27:07.368 "runtime": 1.006181, 00:27:07.368 "iops": 13369.363961354866, 00:27:07.368 "mibps": 52.224077974042444, 00:27:07.368 "io_failed": 0, 00:27:07.368 "io_timeout": 0, 00:27:07.368 "avg_latency_us": 9547.518788960073, 00:27:07.368 "min_latency_us": 4438.574545454546, 00:27:07.368 "max_latency_us": 21805.614545454544 00:27:07.368 } 00:27:07.368 ], 00:27:07.368 "core_count": 1 00:27:07.368 } 00:27:07.368 12:30:38 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:07.368 12:30:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:07.626 12:30:38 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:27:07.626 12:30:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:07.626 12:30:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:07.627 12:30:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:07.627 12:30:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.627 12:30:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:07.885 12:30:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:07.885 12:30:38 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:27:07.885 12:30:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:07.885 12:30:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:07.885 12:30:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:07.885 12:30:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.885 12:30:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:08.143 12:30:38 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:27:08.143 12:30:38 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:08.143 12:30:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:27:08.143 12:30:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:08.143 12:30:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:27:08.143 12:30:38 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.143 12:30:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:27:08.143 12:30:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:08.143 12:30:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:08.143 12:30:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:08.401 [2024-12-05 12:30:39.140703] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:08.401 [2024-12-05 12:30:39.141411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf16530 (107): Transport endpoint is not connected 00:27:08.401 [2024-12-05 12:30:39.142393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf16530 (9): Bad file descriptor 00:27:08.401 [2024-12-05 12:30:39.143390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:27:08.401 [2024-12-05 12:30:39.143420] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:08.401 [2024-12-05 12:30:39.143430] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:27:08.401 [2024-12-05 12:30:39.143440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
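Annotation: the failure being provoked here is deliberate; the test attaches with key1, which does not match the PSK the listener was configured with, and wraps the call in NOT so the expected error keeps the suite green. A minimal sketch of that wrapper's idea (the real NOT in autotest_common.sh does more validation):

    # Invert an exit status: succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # the command unexpectedly succeeded
        fi
        return 0
    }
    # e.g. attaching with the wrong PSK must fail:
    NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1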
00:27:08.401 2024/12/05 12:30:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:27:08.401 request: 00:27:08.401 { 00:27:08.401 "method": "bdev_nvme_attach_controller", 00:27:08.401 "params": { 00:27:08.401 "name": "nvme0", 00:27:08.401 "trtype": "tcp", 00:27:08.401 "traddr": "127.0.0.1", 00:27:08.401 "adrfam": "ipv4", 00:27:08.401 "trsvcid": "4420", 00:27:08.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:08.401 "prchk_reftag": false, 00:27:08.401 "prchk_guard": false, 00:27:08.401 "hdgst": false, 00:27:08.401 "ddgst": false, 00:27:08.401 "psk": "key1", 00:27:08.401 "allow_unrecognized_csi": false 00:27:08.401 } 00:27:08.401 } 00:27:08.401 Got JSON-RPC error response 00:27:08.401 GoRPCClient: error on JSON-RPC call 00:27:08.401 12:30:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:27:08.401 12:30:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:08.401 12:30:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:08.401 12:30:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:08.401 12:30:39 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:27:08.401 12:30:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:08.401 12:30:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:08.401 12:30:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:08.401 12:30:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:08.401 12:30:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.660 12:30:39 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:08.660 12:30:39 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:27:08.660 12:30:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:08.660 12:30:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:08.660 12:30:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:08.660 12:30:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:08.660 12:30:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:08.918 12:30:39 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:27:08.918 12:30:39 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:27:08.918 12:30:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:09.176 12:30:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:27:09.176 12:30:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:09.433 12:30:40 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:27:09.433 12:30:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:27:09.433 12:30:40 keyring_file -- keyring/file.sh@78 -- # jq length 00:27:09.691 12:30:40 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:27:09.691 12:30:40 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.BySExFZQNL 00:27:09.691 12:30:40 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BySExFZQNL 00:27:09.691 12:30:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:27:09.691 12:30:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BySExFZQNL 00:27:09.691 12:30:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:27:09.691 12:30:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.691 12:30:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:27:09.691 12:30:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.691 12:30:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BySExFZQNL 00:27:09.691 12:30:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BySExFZQNL 00:27:09.948 [2024-12-05 12:30:40.773925] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BySExFZQNL': 0100660 00:27:09.948 [2024-12-05 12:30:40.774561] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:09.948 2024/12/05 12:30:40 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.BySExFZQNL], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:27:09.948 request: 00:27:09.948 { 00:27:09.948 "method": "keyring_file_add_key", 00:27:09.948 "params": { 00:27:09.948 "name": "key0", 00:27:09.948 "path": "/tmp/tmp.BySExFZQNL" 00:27:09.948 } 00:27:09.948 } 00:27:09.948 Got JSON-RPC error response 00:27:09.948 GoRPCClient: error on JSON-RPC call 00:27:09.948 12:30:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:27:09.948 12:30:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:09.948 12:30:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:09.948 12:30:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:09.948 12:30:40 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.BySExFZQNL 00:27:09.948 12:30:40 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BySExFZQNL 00:27:09.948 12:30:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BySExFZQNL 00:27:10.206 12:30:41 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.BySExFZQNL 00:27:10.206 12:30:41 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:27:10.206 12:30:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:10.206 12:30:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:10.206 12:30:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:10.206 12:30:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:10.206 12:30:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:10.464 12:30:41 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:27:10.464 12:30:41 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.464 12:30:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:27:10.464 12:30:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.464 12:30:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:27:10.464 12:30:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.464 12:30:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:27:10.464 12:30:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.464 12:30:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.464 12:30:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:10.721 [2024-12-05 12:30:41.502057] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BySExFZQNL': No such file or directory 00:27:10.721 [2024-12-05 12:30:41.502090] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:10.721 [2024-12-05 12:30:41.502107] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:10.721 [2024-12-05 12:30:41.502117] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:27:10.721 [2024-12-05 12:30:41.502125] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:10.721 [2024-12-05 12:30:41.502133] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:10.721 2024/12/05 12:30:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:27:10.721 request: 00:27:10.721 { 00:27:10.721 "method": "bdev_nvme_attach_controller", 00:27:10.721 "params": { 00:27:10.721 "name": "nvme0", 00:27:10.721 "trtype": "tcp", 00:27:10.721 "traddr": "127.0.0.1", 00:27:10.721 "adrfam": "ipv4", 00:27:10.721 "trsvcid": "4420", 00:27:10.721 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:10.721 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:10.721 "prchk_reftag": false, 00:27:10.721 "prchk_guard": false, 00:27:10.721 "hdgst": false, 00:27:10.721 "ddgst": false, 00:27:10.721 "psk": "key0", 00:27:10.721 "allow_unrecognized_csi": false 00:27:10.721 } 00:27:10.721 } 00:27:10.721 Got JSON-RPC error response 00:27:10.721 
GoRPCClient: error on JSON-RPC call 00:27:10.721 12:30:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:27:10.721 12:30:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.721 12:30:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.721 12:30:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.721 12:30:41 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:27:10.721 12:30:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:10.978 12:30:41 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dIYNTbJ7ys 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:10.978 12:30:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:10.978 12:30:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:27:10.978 12:30:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:27:10.978 12:30:41 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:27:10.978 12:30:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:27:10.978 12:30:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dIYNTbJ7ys 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dIYNTbJ7ys 00:27:10.978 12:30:41 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.dIYNTbJ7ys 00:27:10.978 12:30:41 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dIYNTbJ7ys 00:27:10.978 12:30:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dIYNTbJ7ys 00:27:11.236 12:30:42 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:11.236 12:30:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:11.493 nvme0n1 00:27:11.493 12:30:42 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:27:11.493 12:30:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:11.493 12:30:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:11.493 12:30:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:11.493 12:30:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:11.493 12:30:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
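Two mechanisms in the trace above are worth unpacking. First, the keyring_file_add_key rejection: SPDK refuses key files that grant any group or other access, which is why the chmod 0660 attempt fails with "Invalid permissions for key file ... 0100660" and the chmod 0600 retry succeeds. A rough Python rendering of that policy (an approximation of keyring_file_check_path, not its exact code):

    import os
    import stat

    def check_key_file(path: str) -> None:
        # Reject any group/other permission bits, mirroring the 0660-vs-0600
        # behavior exercised above.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & 0o077:
            raise PermissionError(f"Invalid permissions for key file '{path}': {oct(mode)}")

Second, prep_key's inline "python -" heredoc builds the TLS PSK interchange string. Assuming the standard interchange layout (PSK bytes followed by their little-endian CRC-32, base64-encoded, with digest id 00 meaning no PSK hash), a self-contained equivalent:

    import base64
    import struct
    import zlib

    def format_interchange_psk(key: bytes, hash_id: int = 0) -> str:
        # NVMeTLSkey-1:<hash>:<base64(psk || crc32)>: wrapper, as used by
        # the keyring tests; the CRC convention here is my reading of the
        # helper, not copied from it.
        payload = key + struct.pack("<I", zlib.crc32(key))
        return "NVMeTLSkey-1:%02x:%s:" % (hash_id, base64.b64encode(payload).decode())

In these tests the key material is the literal 32-character hex string, so format_interchange_psk(b"00112233445566778899aabbccddeeff") yields a value of the same shape as the NVMeTLSkey-1:00:MDAx... key printed later in this log.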
00:27:11.751 12:30:42 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:27:11.751 12:30:42 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:27:11.751 12:30:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:12.007 12:30:42 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:27:12.007 12:30:42 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:27:12.007 12:30:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:12.007 12:30:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.007 12:30:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:12.264 12:30:43 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:27:12.264 12:30:43 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:27:12.264 12:30:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:12.264 12:30:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:12.264 12:30:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:12.264 12:30:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:12.264 12:30:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:12.521 12:30:43 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:27:12.521 12:30:43 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:12.521 12:30:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:12.778 12:30:43 keyring_file -- keyring/file.sh@105 -- # jq length 00:27:12.778 12:30:43 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:27:12.778 12:30:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:13.035 12:30:43 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:27:13.035 12:30:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dIYNTbJ7ys 00:27:13.035 12:30:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dIYNTbJ7ys 00:27:13.293 12:30:44 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.hY9JO5UCaL 00:27:13.293 12:30:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.hY9JO5UCaL 00:27:13.550 12:30:44 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.550 12:30:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:13.808 nvme0n1 00:27:13.808 12:30:44 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:27:13.808 12:30:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
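One detail worth calling out in the sequence above: keyring keys are reference-counted. Removing key0 while nvme0 still holds it does not free the key; it marks it removed and drops the keyring's own reference, which matches the (( 2 == 2 )), removed == true, (( 1 == 1 )), then zero-keys-after-detach progression traced here. With the rpc() sketch from earlier, the same verification would read:

    # Hypothetical check between keyring_file_remove_key and the detach.
    keys = rpc("/var/tmp/bperf.sock", "keyring_get_keys")["result"]
    key0 = next(k for k in keys if k["name"] == "key0")
    assert key0["removed"] and key0["refcnt"] == 1

The save_config call just above then snapshots the rebuilt state (key0 and key1 re-added from fresh temp files, nvme0 re-attached with key0) as the JSON blob that follows.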
00:27:14.374 12:30:44 keyring_file -- keyring/file.sh@113 -- # config='{ 00:27:14.374 "subsystems": [ 00:27:14.374 { 00:27:14.374 "subsystem": "keyring", 00:27:14.374 "config": [ 00:27:14.374 { 00:27:14.374 "method": "keyring_file_add_key", 00:27:14.374 "params": { 00:27:14.374 "name": "key0", 00:27:14.374 "path": "/tmp/tmp.dIYNTbJ7ys" 00:27:14.374 } 00:27:14.374 }, 00:27:14.374 { 00:27:14.374 "method": "keyring_file_add_key", 00:27:14.374 "params": { 00:27:14.374 "name": "key1", 00:27:14.374 "path": "/tmp/tmp.hY9JO5UCaL" 00:27:14.374 } 00:27:14.374 } 00:27:14.374 ] 00:27:14.374 }, 00:27:14.374 { 00:27:14.374 "subsystem": "iobuf", 00:27:14.374 "config": [ 00:27:14.374 { 00:27:14.374 "method": "iobuf_set_options", 00:27:14.374 "params": { 00:27:14.374 "enable_numa": false, 00:27:14.374 "large_bufsize": 135168, 00:27:14.374 "large_pool_count": 1024, 00:27:14.374 "small_bufsize": 8192, 00:27:14.374 "small_pool_count": 8192 00:27:14.374 } 00:27:14.374 } 00:27:14.374 ] 00:27:14.374 }, 00:27:14.374 { 00:27:14.374 "subsystem": "sock", 00:27:14.374 "config": [ 00:27:14.374 { 00:27:14.374 "method": "sock_set_default_impl", 00:27:14.374 "params": { 00:27:14.374 "impl_name": "posix" 00:27:14.374 } 00:27:14.374 }, 00:27:14.374 { 00:27:14.374 "method": "sock_impl_set_options", 00:27:14.374 "params": { 00:27:14.374 "enable_ktls": false, 00:27:14.374 "enable_placement_id": 0, 00:27:14.374 "enable_quickack": false, 00:27:14.374 "enable_recv_pipe": true, 00:27:14.374 "enable_zerocopy_send_client": false, 00:27:14.374 "enable_zerocopy_send_server": true, 00:27:14.374 "impl_name": "ssl", 00:27:14.374 "recv_buf_size": 4096, 00:27:14.374 "send_buf_size": 4096, 00:27:14.374 "tls_version": 0, 00:27:14.374 "zerocopy_threshold": 0 00:27:14.374 } 00:27:14.374 }, 00:27:14.374 { 00:27:14.374 "method": "sock_impl_set_options", 00:27:14.374 "params": { 00:27:14.374 "enable_ktls": false, 00:27:14.374 "enable_placement_id": 0, 00:27:14.375 "enable_quickack": false, 00:27:14.375 "enable_recv_pipe": true, 00:27:14.375 "enable_zerocopy_send_client": false, 00:27:14.375 "enable_zerocopy_send_server": true, 00:27:14.375 "impl_name": "posix", 00:27:14.375 "recv_buf_size": 2097152, 00:27:14.375 "send_buf_size": 2097152, 00:27:14.375 "tls_version": 0, 00:27:14.375 "zerocopy_threshold": 0 00:27:14.375 } 00:27:14.375 } 00:27:14.375 ] 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "subsystem": "vmd", 00:27:14.375 "config": [] 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "subsystem": "accel", 00:27:14.375 "config": [ 00:27:14.375 { 00:27:14.375 "method": "accel_set_options", 00:27:14.375 "params": { 00:27:14.375 "buf_count": 2048, 00:27:14.375 "large_cache_size": 16, 00:27:14.375 "sequence_count": 2048, 00:27:14.375 "small_cache_size": 128, 00:27:14.375 "task_count": 2048 00:27:14.375 } 00:27:14.375 } 00:27:14.375 ] 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "subsystem": "bdev", 00:27:14.375 "config": [ 00:27:14.375 { 00:27:14.375 "method": "bdev_set_options", 00:27:14.375 "params": { 00:27:14.375 "bdev_auto_examine": true, 00:27:14.375 "bdev_io_cache_size": 256, 00:27:14.375 "bdev_io_pool_size": 65535, 00:27:14.375 "iobuf_large_cache_size": 16, 00:27:14.375 "iobuf_small_cache_size": 128 00:27:14.375 } 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "method": "bdev_raid_set_options", 00:27:14.375 "params": { 00:27:14.375 "process_max_bandwidth_mb_sec": 0, 00:27:14.375 "process_window_size_kb": 1024 00:27:14.375 } 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "method": "bdev_iscsi_set_options", 00:27:14.375 "params": { 00:27:14.375 
"timeout_sec": 30 00:27:14.375 } 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "method": "bdev_nvme_set_options", 00:27:14.375 "params": { 00:27:14.375 "action_on_timeout": "none", 00:27:14.375 "allow_accel_sequence": false, 00:27:14.375 "arbitration_burst": 0, 00:27:14.375 "bdev_retry_count": 3, 00:27:14.375 "ctrlr_loss_timeout_sec": 0, 00:27:14.375 "delay_cmd_submit": true, 00:27:14.375 "dhchap_dhgroups": [ 00:27:14.375 "null", 00:27:14.375 "ffdhe2048", 00:27:14.375 "ffdhe3072", 00:27:14.375 "ffdhe4096", 00:27:14.375 "ffdhe6144", 00:27:14.375 "ffdhe8192" 00:27:14.375 ], 00:27:14.375 "dhchap_digests": [ 00:27:14.375 "sha256", 00:27:14.375 "sha384", 00:27:14.375 "sha512" 00:27:14.375 ], 00:27:14.375 "disable_auto_failback": false, 00:27:14.375 "fast_io_fail_timeout_sec": 0, 00:27:14.375 "generate_uuids": false, 00:27:14.375 "high_priority_weight": 0, 00:27:14.375 "io_path_stat": false, 00:27:14.375 "io_queue_requests": 512, 00:27:14.375 "keep_alive_timeout_ms": 10000, 00:27:14.375 "low_priority_weight": 0, 00:27:14.375 "medium_priority_weight": 0, 00:27:14.375 "nvme_adminq_poll_period_us": 10000, 00:27:14.375 "nvme_error_stat": false, 00:27:14.375 "nvme_ioq_poll_period_us": 0, 00:27:14.375 "rdma_cm_event_timeout_ms": 0, 00:27:14.375 "rdma_max_cq_size": 0, 00:27:14.375 "rdma_srq_size": 0, 00:27:14.375 "reconnect_delay_sec": 0, 00:27:14.375 "timeout_admin_us": 0, 00:27:14.375 "timeout_us": 0, 00:27:14.375 "transport_ack_timeout": 0, 00:27:14.375 "transport_retry_count": 4, 00:27:14.375 "transport_tos": 0 00:27:14.375 } 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "method": "bdev_nvme_attach_controller", 00:27:14.375 "params": { 00:27:14.375 "adrfam": "IPv4", 00:27:14.375 "ctrlr_loss_timeout_sec": 0, 00:27:14.375 "ddgst": false, 00:27:14.375 "fast_io_fail_timeout_sec": 0, 00:27:14.375 "hdgst": false, 00:27:14.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:14.375 "multipath": "multipath", 00:27:14.375 "name": "nvme0", 00:27:14.375 "prchk_guard": false, 00:27:14.375 "prchk_reftag": false, 00:27:14.375 "psk": "key0", 00:27:14.375 "reconnect_delay_sec": 0, 00:27:14.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:14.375 "traddr": "127.0.0.1", 00:27:14.375 "trsvcid": "4420", 00:27:14.375 "trtype": "TCP" 00:27:14.375 } 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "method": "bdev_nvme_set_hotplug", 00:27:14.375 "params": { 00:27:14.375 "enable": false, 00:27:14.375 "period_us": 100000 00:27:14.375 } 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "method": "bdev_wait_for_examine" 00:27:14.375 } 00:27:14.375 ] 00:27:14.375 }, 00:27:14.375 { 00:27:14.375 "subsystem": "nbd", 00:27:14.375 "config": [] 00:27:14.375 } 00:27:14.375 ] 00:27:14.375 }' 00:27:14.375 12:30:44 keyring_file -- keyring/file.sh@115 -- # killprocess 109921 00:27:14.375 12:30:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109921 ']' 00:27:14.375 12:30:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109921 00:27:14.375 12:30:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:27:14.375 12:30:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.375 12:30:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109921 00:27:14.375 killing process with pid 109921 00:27:14.375 Received shutdown signal, test time was about 1.000000 seconds 00:27:14.375 00:27:14.375 Latency(us) 00:27:14.375 [2024-12-05T12:30:45.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.375 [2024-12-05T12:30:45.244Z] 
=================================================================================================================== 00:27:14.375 [2024-12-05T12:30:45.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:14.375 12:30:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:14.375 12:30:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:14.375 12:30:45 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109921' 00:27:14.375 12:30:45 keyring_file -- common/autotest_common.sh@973 -- # kill 109921 00:27:14.375 12:30:45 keyring_file -- common/autotest_common.sh@978 -- # wait 109921 00:27:14.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:14.634 12:30:45 keyring_file -- keyring/file.sh@118 -- # bperfpid=110391 00:27:14.634 12:30:45 keyring_file -- keyring/file.sh@120 -- # waitforlisten 110391 /var/tmp/bperf.sock 00:27:14.634 12:30:45 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110391 ']' 00:27:14.634 12:30:45 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:14.634 12:30:45 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.634 12:30:45 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:27:14.634 "subsystems": [ 00:27:14.634 { 00:27:14.634 "subsystem": "keyring", 00:27:14.634 "config": [ 00:27:14.634 { 00:27:14.634 "method": "keyring_file_add_key", 00:27:14.634 "params": { 00:27:14.634 "name": "key0", 00:27:14.634 "path": "/tmp/tmp.dIYNTbJ7ys" 00:27:14.634 } 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "method": "keyring_file_add_key", 00:27:14.634 "params": { 00:27:14.634 "name": "key1", 00:27:14.634 "path": "/tmp/tmp.hY9JO5UCaL" 00:27:14.634 } 00:27:14.634 } 00:27:14.634 ] 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "subsystem": "iobuf", 00:27:14.634 "config": [ 00:27:14.634 { 00:27:14.634 "method": "iobuf_set_options", 00:27:14.634 "params": { 00:27:14.634 "enable_numa": false, 00:27:14.634 "large_bufsize": 135168, 00:27:14.634 "large_pool_count": 1024, 00:27:14.634 "small_bufsize": 8192, 00:27:14.634 "small_pool_count": 8192 00:27:14.634 } 00:27:14.634 } 00:27:14.634 ] 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "subsystem": "sock", 00:27:14.634 "config": [ 00:27:14.634 { 00:27:14.634 "method": "sock_set_default_impl", 00:27:14.634 "params": { 00:27:14.634 "impl_name": "posix" 00:27:14.634 } 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "method": "sock_impl_set_options", 00:27:14.634 "params": { 00:27:14.634 "enable_ktls": false, 00:27:14.634 "enable_placement_id": 0, 00:27:14.634 "enable_quickack": false, 00:27:14.634 "enable_recv_pipe": true, 00:27:14.634 "enable_zerocopy_send_client": false, 00:27:14.634 "enable_zerocopy_send_server": true, 00:27:14.634 "impl_name": "ssl", 00:27:14.634 "recv_buf_size": 4096, 00:27:14.634 "send_buf_size": 4096, 00:27:14.634 "tls_version": 0, 00:27:14.634 "zerocopy_threshold": 0 00:27:14.634 } 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "method": "sock_impl_set_options", 00:27:14.634 "params": { 00:27:14.634 "enable_ktls": false, 00:27:14.634 "enable_placement_id": 0, 00:27:14.634 "enable_quickack": false, 00:27:14.634 "enable_recv_pipe": true, 00:27:14.634 "enable_zerocopy_send_client": false, 00:27:14.634 "enable_zerocopy_send_server": true, 00:27:14.634 "impl_name": "posix", 00:27:14.634 "recv_buf_size": 2097152, 00:27:14.634 "send_buf_size": 2097152, 00:27:14.634 "tls_version": 0, 00:27:14.634 "zerocopy_threshold": 0 00:27:14.634 } 
00:27:14.634 } 00:27:14.634 ] 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "subsystem": "vmd", 00:27:14.634 "config": [] 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "subsystem": "accel", 00:27:14.634 "config": [ 00:27:14.634 { 00:27:14.634 "method": "accel_set_options", 00:27:14.634 "params": { 00:27:14.634 "buf_count": 2048, 00:27:14.634 "large_cache_size": 16, 00:27:14.634 "sequence_count": 2048, 00:27:14.634 "small_cache_size": 128, 00:27:14.634 "task_count": 2048 00:27:14.634 } 00:27:14.634 } 00:27:14.634 ] 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "subsystem": "bdev", 00:27:14.634 "config": [ 00:27:14.634 { 00:27:14.634 "method": "bdev_set_options", 00:27:14.634 "params": { 00:27:14.634 "bdev_auto_examine": true, 00:27:14.634 "bdev_io_cache_size": 256, 00:27:14.634 "bdev_io_pool_size": 65535, 00:27:14.634 "iobuf_large_cache_size": 16, 00:27:14.634 "iobuf_small_cache_size": 128 00:27:14.634 } 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "method": "bdev_raid_set_options", 00:27:14.634 "params": { 00:27:14.634 "process_max_bandwidth_mb_sec": 0, 00:27:14.634 "process_window_size_kb": 1024 00:27:14.634 } 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "method": "bdev_iscsi_set_options", 00:27:14.634 "params": { 00:27:14.634 "timeout_sec": 30 00:27:14.634 } 00:27:14.634 }, 00:27:14.634 { 00:27:14.634 "method": "bdev_nvme_set_options", 00:27:14.634 "params": { 00:27:14.634 "action_on_timeout": "none", 00:27:14.634 "allow_accel_sequence": false, 00:27:14.634 "arbitration_burst": 0, 00:27:14.634 "bdev_retry_count": 3, 00:27:14.634 "ctrlr_loss_timeout_sec": 0, 00:27:14.634 "delay_cmd_submit": true, 00:27:14.634 "dhchap_dhgroups": [ 00:27:14.634 "null", 00:27:14.634 "ffdhe2048", 00:27:14.634 "ffdhe3072", 00:27:14.634 "ffdhe4096", 00:27:14.634 "ffdhe6144", 00:27:14.634 "ffdhe8192" 00:27:14.634 ], 00:27:14.634 "dhchap_digests": [ 00:27:14.634 "sha256", 00:27:14.634 "sha384", 00:27:14.634 "sha512" 00:27:14.634 ], 00:27:14.634 "disable_auto_failback": false, 00:27:14.634 "fast_io_fail_timeout_sec": 0, 00:27:14.634 "generate_uuids": false, 00:27:14.634 "high_priority_weight": 0, 00:27:14.635 "io_path_stat": false, 00:27:14.635 "io_queue_requests": 512, 00:27:14.635 "keep_alive_timeout_ms": 10000, 00:27:14.635 "low_priority_weight": 0, 00:27:14.635 "medium_priority_weight": 0, 00:27:14.635 "nvme_adminq_poll_period_us": 10000, 00:27:14.635 "nvme_error_stat": false, 00:27:14.635 "nvme_ioq_poll_period_us": 0, 00:27:14.635 "rdma_cm_event_timeout_ms": 0, 00:27:14.635 "rdma_max_cq_size": 0, 00:27:14.635 "rdma_srq_size": 0, 00:27:14.635 "reconnect_delay_sec": 0, 00:27:14.635 "timeout_admin_us": 0, 00:27:14.635 "timeout_us": 0, 00:27:14.635 "transport_ack_timeout": 0, 00:27:14.635 "transport_retry_count": 4, 00:27:14.635 "transport_tos": 0 00:27:14.635 } 00:27:14.635 }, 00:27:14.635 { 00:27:14.635 "method": "bdev_nvme_attach_controller", 00:27:14.635 "params": { 00:27:14.635 "adrfam": "IPv4", 00:27:14.635 "ctrlr_loss_timeout_sec": 0, 00:27:14.635 "ddgst": false, 00:27:14.635 "fast_io_fail_timeout_sec": 0, 00:27:14.635 "hdgst": false, 00:27:14.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:14.635 "multipath": "multipath", 00:27:14.635 "name": "nvme0", 00:27:14.635 "prchk_guard": false, 00:27:14.635 "prchk_reftag": false, 00:27:14.635 "psk": "key0", 00:27:14.635 "reconnect_delay_sec": 0, 00:27:14.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:14.635 "traddr": "127.0.0.1", 00:27:14.635 "trsvcid": "4420", 00:27:14.635 "trtype": "TCP" 00:27:14.635 } 00:27:14.635 }, 00:27:14.635 { 00:27:14.635 "method": 
"bdev_nvme_set_hotplug", 00:27:14.635 "params": { 00:27:14.635 "enable": false, 00:27:14.635 "period_us": 100000 00:27:14.635 } 00:27:14.635 }, 00:27:14.635 { 00:27:14.635 "method": "bdev_wait_for_examine" 00:27:14.635 } 00:27:14.635 ] 00:27:14.635 }, 00:27:14.635 { 00:27:14.635 "subsystem": "nbd", 00:27:14.635 "config": [] 00:27:14.635 } 00:27:14.635 ] 00:27:14.635 }' 00:27:14.635 12:30:45 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:14.635 12:30:45 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:14.635 12:30:45 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.635 12:30:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:14.635 [2024-12-05 12:30:45.330835] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 00:27:14.635 [2024-12-05 12:30:45.330948] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110391 ] 00:27:14.635 [2024-12-05 12:30:45.473253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.893 [2024-12-05 12:30:45.520712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.893 [2024-12-05 12:30:45.729364] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:15.459 12:30:46 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.459 12:30:46 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:27:15.459 12:30:46 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:27:15.459 12:30:46 keyring_file -- keyring/file.sh@121 -- # jq length 00:27:15.459 12:30:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.716 12:30:46 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:15.716 12:30:46 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:27:15.716 12:30:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.716 12:30:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:15.716 12:30:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.716 12:30:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:15.716 12:30:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:15.974 12:30:46 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:27:15.974 12:30:46 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:27:15.974 12:30:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:15.974 12:30:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:15.974 12:30:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:15.974 12:30:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:15.974 12:30:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:16.232 12:30:47 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:27:16.232 
12:30:47 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:27:16.232 12:30:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:16.232 12:30:47 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:27:16.491 12:30:47 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:27:16.491 12:30:47 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:16.491 12:30:47 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dIYNTbJ7ys /tmp/tmp.hY9JO5UCaL 00:27:16.491 12:30:47 keyring_file -- keyring/file.sh@20 -- # killprocess 110391 00:27:16.491 12:30:47 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110391 ']' 00:27:16.491 12:30:47 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110391 00:27:16.491 12:30:47 keyring_file -- common/autotest_common.sh@959 -- # uname 00:27:16.491 12:30:47 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.491 12:30:47 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110391 00:27:16.749 killing process with pid 110391 00:27:16.749 Received shutdown signal, test time was about 1.000000 seconds 00:27:16.749 00:27:16.749 Latency(us) 00:27:16.749 [2024-12-05T12:30:47.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.749 [2024-12-05T12:30:47.618Z] =================================================================================================================== 00:27:16.749 [2024-12-05T12:30:47.618Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110391' 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@973 -- # kill 110391 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@978 -- # wait 110391 00:27:16.749 12:30:47 keyring_file -- keyring/file.sh@21 -- # killprocess 109886 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109886 ']' 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109886 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@959 -- # uname 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.749 12:30:47 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109886 00:27:17.007 killing process with pid 109886 00:27:17.007 12:30:47 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:17.007 12:30:47 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:17.007 12:30:47 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109886' 00:27:17.007 12:30:47 keyring_file -- common/autotest_common.sh@973 -- # kill 109886 00:27:17.007 12:30:47 keyring_file -- common/autotest_common.sh@978 -- # wait 109886 00:27:17.265 00:27:17.265 real 0m15.926s 00:27:17.265 user 0m38.988s 00:27:17.265 sys 0m3.413s 00:27:17.265 12:30:47 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.266 12:30:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:17.266 ************************************ 00:27:17.266 END TEST keyring_file 00:27:17.266 
************************************ 00:27:17.266 12:30:48 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:27:17.266 12:30:48 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:27:17.266 12:30:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:17.266 12:30:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.266 12:30:48 -- common/autotest_common.sh@10 -- # set +x 00:27:17.266 ************************************ 00:27:17.266 START TEST keyring_linux 00:27:17.266 ************************************ 00:27:17.266 12:30:48 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:27:17.266 Joined session keyring: 671467239 00:27:17.266 * Looking for test storage... 00:27:17.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:27:17.527 12:30:48 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:17.527 12:30:48 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:27:17.527 12:30:48 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:17.527 12:30:48 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@345 -- # : 1 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.527 12:30:48 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.528 12:30:48 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.528 12:30:48 keyring_linux -- scripts/common.sh@368 -- # return 0 00:27:17.528 12:30:48 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.528 12:30:48 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:17.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.528 --rc genhtml_branch_coverage=1 00:27:17.528 --rc genhtml_function_coverage=1 00:27:17.528 --rc genhtml_legend=1 00:27:17.528 --rc geninfo_all_blocks=1 00:27:17.528 --rc geninfo_unexecuted_blocks=1 00:27:17.528 00:27:17.528 ' 00:27:17.528 12:30:48 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:17.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.528 --rc genhtml_branch_coverage=1 00:27:17.528 --rc genhtml_function_coverage=1 00:27:17.528 --rc genhtml_legend=1 00:27:17.528 --rc geninfo_all_blocks=1 00:27:17.528 --rc geninfo_unexecuted_blocks=1 00:27:17.528 00:27:17.528 ' 00:27:17.528 12:30:48 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:17.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.528 --rc genhtml_branch_coverage=1 00:27:17.528 --rc genhtml_function_coverage=1 00:27:17.528 --rc genhtml_legend=1 00:27:17.528 --rc geninfo_all_blocks=1 00:27:17.528 --rc geninfo_unexecuted_blocks=1 00:27:17.528 00:27:17.528 ' 00:27:17.528 12:30:48 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:17.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.528 --rc genhtml_branch_coverage=1 00:27:17.528 --rc genhtml_function_coverage=1 00:27:17.528 --rc genhtml_legend=1 00:27:17.528 --rc geninfo_all_blocks=1 00:27:17.528 --rc geninfo_unexecuted_blocks=1 00:27:17.528 00:27:17.528 ' 00:27:17.528 12:30:48 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:27:17.528 12:30:48 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.528 12:30:48 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=bd2c7f89-972d-42a2-ab59-8d5d886644e3 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:17.528 12:30:48 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.528 12:30:48 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.528 12:30:48 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.528 12:30:48 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.528 12:30:48 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.528 12:30:48 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.528 12:30:48 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.528 12:30:48 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:17.528 12:30:48 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@51 -- # : 0 
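Sourcing nvmf/common.sh above also mints a fresh host identity: NVME_HOSTNQN comes from nvme gen-hostnqn, which simply emits the spec's UUID-based NQN form. An equivalent sketch:

    import uuid

    # Same shape as the nqn.2014-08.org.nvmexpress:uuid:bd2c7f89-... value above.
    hostnqn = f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"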
00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:17.528 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.528 12:30:48 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:17.528 12:30:48 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:17.528 12:30:48 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:17.528 12:30:48 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:17.528 12:30:48 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:17.528 12:30:48 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:17.528 12:30:48 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:17.528 12:30:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:17.528 12:30:48 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:17.528 12:30:48 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:17.528 12:30:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:17.528 12:30:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:17.528 12:30:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:27:17.528 12:30:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:27:17.529 12:30:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:17.529 /tmp/:spdk-test:key0 00:27:17.529 12:30:48 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:27:17.529 12:30:48 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:17.529 12:30:48 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:27:17.529 12:30:48 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:27:17.529 12:30:48 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:27:17.529 12:30:48 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:27:17.529 12:30:48 keyring_linux -- nvmf/common.sh@733 -- # python - 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:17.529 /tmp/:spdk-test:key1 00:27:17.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.529 12:30:48 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:17.529 12:30:48 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=110553 00:27:17.529 12:30:48 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:17.529 12:30:48 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 110553 00:27:17.529 12:30:48 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110553 ']' 00:27:17.529 12:30:48 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.529 12:30:48 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.529 12:30:48 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.529 12:30:48 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.529 12:30:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:17.801 [2024-12-05 12:30:48.443514] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
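The waitforlisten step above gates the harness on the target's RPC socket (/var/tmp/spdk.sock by default). A simplified stand-in is sketched below; the real helper does more, such as retrying RPCs against the socket, so treat this purely as an illustration:

    import os
    import time

    def waitforlisten(pid: int, sock_path: str = "/var/tmp/spdk.sock",
                      timeout: float = 10.0) -> None:
        # Poll for the RPC socket while confirming the target is still alive.
        deadline = time.monotonic() + timeout
        while not os.path.exists(sock_path):
            os.kill(pid, 0)  # raises ProcessLookupError if the target died
            if time.monotonic() > deadline:
                raise TimeoutError(f"{sock_path} never appeared")
            time.sleep(0.1)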
00:27:17.801 [2024-12-05 12:30:48.443851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110553 ] 00:27:17.801 [2024-12-05 12:30:48.588295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.801 [2024-12-05 12:30:48.632843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.078 12:30:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.078 12:30:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:27:18.078 12:30:48 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:18.078 12:30:48 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.078 12:30:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:18.078 [2024-12-05 12:30:48.899239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.078 null0 00:27:18.078 [2024-12-05 12:30:48.931261] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:18.078 [2024-12-05 12:30:48.931644] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:18.343 12:30:48 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.343 12:30:48 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:18.343 1041390339 00:27:18.343 12:30:48 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:18.343 412838260 00:27:18.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:18.343 12:30:48 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=110570 00:27:18.343 12:30:48 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 110570 /var/tmp/bperf.sock 00:27:18.343 12:30:48 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:18.343 12:30:48 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110570 ']' 00:27:18.343 12:30:48 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:18.343 12:30:48 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.343 12:30:48 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:18.343 12:30:48 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.343 12:30:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:18.343 [2024-12-05 12:30:49.005358] Starting SPDK v25.01-pre git sha1 85bc1e85a / DPDK 24.03.0 initialization... 
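The two keyctl add calls above are the heart of keyring_linux: the NVMeTLSkey strings are stored as user-type keys on the session keyring (@s), and the serial numbers printed back (1041390339 and 412838260) are what check_keys later matches against keyctl search. The same calls wrapped thinly in Python (identical commands to the trace):

    import subprocess

    def keyring_add(name: str, material: str) -> int:
        # keyctl prints the new key's serial number on stdout.
        out = subprocess.check_output(["keyctl", "add", "user", name, material, "@s"], text=True)
        return int(out)

    def keyring_find(name: str) -> int:
        return int(subprocess.check_output(["keyctl", "search", "@s", "user", name], text=True))

    def keyring_read(sn: int) -> str:
        return subprocess.check_output(["keyctl", "print", str(sn)], text=True).strip()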
00:27:18.343 [2024-12-05 12:30:49.005421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110570 ] 00:27:18.343 [2024-12-05 12:30:49.144310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.343 [2024-12-05 12:30:49.198049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.603 12:30:49 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.603 12:30:49 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:27:18.603 12:30:49 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:18.603 12:30:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:18.862 12:30:49 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:18.862 12:30:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:19.119 12:30:49 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:19.119 12:30:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:19.376 [2024-12-05 12:30:50.157884] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:19.376 nvme0n1 00:27:19.633 12:30:50 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:19.633 12:30:50 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:19.633 12:30:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:19.633 12:30:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:19.633 12:30:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:19.633 12:30:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:19.891 12:30:50 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:19.891 12:30:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:19.891 12:30:50 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:19.891 12:30:50 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:19.891 12:30:50 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:19.891 12:30:50 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:19.891 12:30:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:20.149 12:30:50 keyring_linux -- keyring/linux.sh@25 -- # sn=1041390339 00:27:20.149 12:30:50 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:20.149 12:30:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:20.149 12:30:50 keyring_linux -- keyring/linux.sh@26 -- # [[ 1041390339 == \1\0\4\1\3\9\0\3\3\9 ]] 00:27:20.149 12:30:50 
00:27:20.149 12:30:50 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:20.149 Running I/O for 1 seconds...
00:27:21.524 13025.00 IOPS, 50.88 MiB/s
00:27:21.524 Latency(us)
00:27:21.524 [2024-12-05T12:30:52.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:21.524 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:21.524 nvme0n1 : 1.01 13020.68 50.86 0.00 0.00 9780.20 2666.12 12451.84
00:27:21.524 [2024-12-05T12:30:52.393Z] ===================================================================================================================
00:27:21.524 [2024-12-05T12:30:52.393Z] Total : 13020.68 50.86 0.00 0.00 9780.20 2666.12 12451.84
00:27:21.524 {
00:27:21.524 "results": [
00:27:21.524 {
00:27:21.524 "job": "nvme0n1",
00:27:21.524 "core_mask": "0x2",
00:27:21.524 "workload": "randread",
00:27:21.524 "status": "finished",
00:27:21.524 "queue_depth": 128,
00:27:21.524 "io_size": 4096,
00:27:21.524 "runtime": 1.010162,
00:27:21.524 "iops": 13020.683811111485,
00:27:21.524 "mibps": 50.86204613715424,
00:27:21.524 "io_failed": 0,
00:27:21.524 "io_timeout": 0,
00:27:21.524 "avg_latency_us": 9780.201542959436,
00:27:21.524 "min_latency_us": 2666.1236363636363,
00:27:21.524 "max_latency_us": 12451.84
00:27:21.524 }
00:27:21.524 ],
00:27:21.524 "core_count": 1
00:27:21.524 }
00:27:21.524 12:30:51 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:27:21.524 12:30:51 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:27:21.524 12:30:52 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:27:21.524 12:30:52 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:27:21.524 12:30:52 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:27:21.524 12:30:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:27:21.524 12:30:52 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:27:21.524 12:30:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:27:21.782 12:30:52 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:27:21.782 12:30:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:27:21.782 12:30:52 keyring_linux -- keyring/linux.sh@23 -- # return
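
[editor's note] perform_tests prints both the human-readable table and a JSON document carrying the same numbers, which is the form scripts usually consume. A small sketch of pulling the headline figures out of it (assuming the JSON block above has been captured to results.json, a hypothetical file name; the field names are exactly those in the log):

  # Print one summary line per job from the bdevperf results document.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us"' results.json
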
00:27:21.782 12:30:52 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:21.782 12:30:52 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:27:21.782 12:30:52 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:21.782 12:30:52 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:27:21.782 12:30:52 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:21.782 12:30:52 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:27:21.782 12:30:52 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:21.782 12:30:52 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:21.782 12:30:52 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:27:22.041 [2024-12-05 12:30:52.834570] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:27:22.041 [2024-12-05 12:30:52.835483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97a4b0 (107): Transport endpoint is not connected
00:27:22.041 [2024-12-05 12:30:52.836462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97a4b0 (9): Bad file descriptor
00:27:22.041 [2024-12-05 12:30:52.837459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:27:22.041 [2024-12-05 12:30:52.837715] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:27:22.041 [2024-12-05 12:30:52.838053] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:27:22.041 [2024-12-05 12:30:52.838258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:27:22.041 2024/12/05 12:30:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:27:22.041 request:
00:27:22.041 {
00:27:22.041 "method": "bdev_nvme_attach_controller",
00:27:22.041 "params": {
00:27:22.041 "name": "nvme0",
00:27:22.041 "trtype": "tcp",
00:27:22.041 "traddr": "127.0.0.1",
00:27:22.041 "adrfam": "ipv4",
00:27:22.041 "trsvcid": "4420",
00:27:22.041 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:22.041 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:22.041 "prchk_reftag": false,
00:27:22.041 "prchk_guard": false,
00:27:22.041 "hdgst": false,
00:27:22.041 "ddgst": false,
00:27:22.041 "psk": ":spdk-test:key1",
00:27:22.041 "allow_unrecognized_csi": false
00:27:22.041 }
00:27:22.041 }
00:27:22.041 Got JSON-RPC error response
00:27:22.041 GoRPCClient: error on JSON-RPC call
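
[editor's note] The attach with :spdk-test:key1 is deliberately expected to fail (the target only accepts key0), so it is wrapped in NOT, an autotest_common.sh helper that inverts the wrapped command's exit status. The real helper, partially traced above, also validates its argument via valid_exec_arg and tracks the error code in es; stripped to the core idea it behaves roughly like this sketch:

  NOT() {
      # Succeed only if the wrapped command fails, so an expected error passes.
      if "$@"; then
          return 1
      fi
      return 0
  }
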
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@33 -- # sn=1041390339
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1041390339
00:27:22.041 1 links removed
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@33 -- # sn=412838260
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 412838260
00:27:22.041 1 links removed
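
[editor's note] cleanup walks both well-known key names, resolves each back to its serial, and unlinks it, which is why "1 links removed" appears twice above. A condensed sketch of that unlink_key flow (the function and variable names mirror keyring/linux.sh, but this is a simplification, not the script itself):

  unlink_key() {
      local name=$1 sn
      # Resolve the well-known name to its serial in the session keyring.
      sn=$(keyctl search @s user ":spdk-test:$name") || return 0
      # Drop the link; the kernel reaps the key once nothing references it.
      keyctl unlink "$sn"
  }
  unlink_key key0
  unlink_key key1
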
00:27:22.041 12:30:52 keyring_linux -- keyring/linux.sh@41 -- # killprocess 110570
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110570 ']'
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110570
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110570
00:27:22.041 killing process with pid 110570
Received shutdown signal, test time was about 1.000000 seconds
00:27:22.041
00:27:22.041 Latency(us)
00:27:22.041 [2024-12-05T12:30:52.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:22.041 [2024-12-05T12:30:52.910Z] ===================================================================================================================
00:27:22.041 [2024-12-05T12:30:52.910Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110570'
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@973 -- # kill 110570
00:27:22.041 12:30:52 keyring_linux -- common/autotest_common.sh@978 -- # wait 110570
00:27:22.300 12:30:53 keyring_linux -- keyring/linux.sh@42 -- # killprocess 110553
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110553 ']'
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110553
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110553
00:27:22.300 killing process with pid 110553
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110553'
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@973 -- # kill 110553
00:27:22.300 12:30:53 keyring_linux -- common/autotest_common.sh@978 -- # wait 110553
00:27:22.867 ************************************
00:27:22.867 END TEST keyring_linux
00:27:22.867 ************************************
00:27:22.867
00:27:22.867 real 0m5.572s
00:27:22.867 user 0m10.781s
00:27:22.867 sys 0m1.617s
00:27:22.867 12:30:53 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:22.867 12:30:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:27:22.867 12:30:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:27:22.867 12:30:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:27:22.867 12:30:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:27:22.867 12:30:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:27:22.867 12:30:53 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:27:22.867 12:30:53 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:27:22.867 12:30:53 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:27:22.867 12:30:53 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:22.867 12:30:53 -- common/autotest_common.sh@10 -- # set +x
00:27:22.867 12:30:53 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:27:22.867 12:30:53 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:27:22.867 12:30:53 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:27:22.867 12:30:53 -- common/autotest_common.sh@10 -- # set +x
00:27:25.396 INFO: APP EXITING
00:27:25.396 INFO: killing all VMs
00:27:25.396 INFO: killing vhost app
00:27:25.396 INFO: EXIT DONE
00:27:25.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:25.653 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:27:25.654 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:27:26.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:26.588 Cleaning
00:27:26.588 Removing: /var/run/dpdk/spdk0/config
00:27:26.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:27:26.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:27:26.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:27:26.588 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:27:26.588 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:27:26.588 Removing: /var/run/dpdk/spdk0/hugepage_info
00:27:26.588 Removing: /var/run/dpdk/spdk1/config
00:27:26.588 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:27:26.588 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:27:26.588 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:27:26.588 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:27:26.588 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:27:26.588 Removing: /var/run/dpdk/spdk1/hugepage_info
00:27:26.588 Removing: /var/run/dpdk/spdk2/config
00:27:26.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:27:26.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:27:26.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:27:26.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:27:26.588 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:27:26.588 Removing: /var/run/dpdk/spdk2/hugepage_info
00:27:26.588 Removing: /var/run/dpdk/spdk3/config
00:27:26.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:27:26.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:27:26.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:27:26.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:27:26.588 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:27:26.588 Removing: /var/run/dpdk/spdk3/hugepage_info
00:27:26.588 Removing: /var/run/dpdk/spdk4/config
00:27:26.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:27:26.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:27:26.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:27:26.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:27:26.588 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:27:26.588 Removing: /var/run/dpdk/spdk4/hugepage_info
00:27:26.588 Removing: /dev/shm/nvmf_trace.0
00:27:26.588 Removing: /dev/shm/spdk_tgt_trace.pid58426
00:27:26.588 Removing: /var/run/dpdk/spdk0
00:27:26.588 Removing: /var/run/dpdk/spdk1
00:27:26.588 Removing: /var/run/dpdk/spdk2
00:27:26.588 Removing: /var/run/dpdk/spdk3
00:27:26.588 Removing: /var/run/dpdk/spdk4
00:27:26.588 Removing: /var/run/dpdk/spdk_pid100432
00:27:26.588 Removing: /var/run/dpdk/spdk_pid100478
00:27:26.588 Removing: /var/run/dpdk/spdk_pid100827
00:27:26.588 Removing: /var/run/dpdk/spdk_pid100869
00:27:26.588 Removing: /var/run/dpdk/spdk_pid101276
00:27:26.588 Removing: /var/run/dpdk/spdk_pid101839
00:27:26.588 Removing: /var/run/dpdk/spdk_pid102255
00:27:26.588 Removing: /var/run/dpdk/spdk_pid103271
00:27:26.588 Removing: /var/run/dpdk/spdk_pid104294
00:27:26.588 Removing: /var/run/dpdk/spdk_pid104401
00:27:26.588 Removing: /var/run/dpdk/spdk_pid104469
00:27:26.588 Removing: /var/run/dpdk/spdk_pid106039
00:27:26.588 Removing: /var/run/dpdk/spdk_pid106345
00:27:26.588 Removing: /var/run/dpdk/spdk_pid106687
00:27:26.588 Removing: /var/run/dpdk/spdk_pid107232
00:27:26.588 Removing: /var/run/dpdk/spdk_pid107244
00:27:26.588 Removing: /var/run/dpdk/spdk_pid107639
00:27:26.588 Removing: /var/run/dpdk/spdk_pid107800
00:27:26.588 Removing: /var/run/dpdk/spdk_pid107957
00:27:26.588 Removing: /var/run/dpdk/spdk_pid108050
00:27:26.588 Removing: /var/run/dpdk/spdk_pid108207
00:27:26.588 Removing: /var/run/dpdk/spdk_pid108316
00:27:26.588 Removing: /var/run/dpdk/spdk_pid109027
00:27:26.588 Removing: /var/run/dpdk/spdk_pid109067
00:27:26.588 Removing: /var/run/dpdk/spdk_pid109098
00:27:26.588 Removing: /var/run/dpdk/spdk_pid109353
00:27:26.588 Removing: /var/run/dpdk/spdk_pid109387
00:27:26.588 Removing: /var/run/dpdk/spdk_pid109418
00:27:26.846 Removing: /var/run/dpdk/spdk_pid109886
00:27:26.846 Removing: /var/run/dpdk/spdk_pid109921
00:27:26.846 Removing: /var/run/dpdk/spdk_pid110391
00:27:26.846 Removing: /var/run/dpdk/spdk_pid110553
00:27:26.846 Removing: /var/run/dpdk/spdk_pid110570
00:27:26.846 Removing: /var/run/dpdk/spdk_pid58272
00:27:26.846 Removing: /var/run/dpdk/spdk_pid58426
00:27:26.846 Removing: /var/run/dpdk/spdk_pid58687
00:27:26.846 Removing: /var/run/dpdk/spdk_pid58774
00:27:26.846 Removing: /var/run/dpdk/spdk_pid58805
00:27:26.846 Removing: /var/run/dpdk/spdk_pid58910
00:27:26.846 Removing: /var/run/dpdk/spdk_pid58945
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59079
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59364
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59542
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59627
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59719
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59803
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59836
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59871
00:27:26.846 Removing: /var/run/dpdk/spdk_pid59941
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60045
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60664
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60720
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60770
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60785
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60864
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60878
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60957
00:27:26.846 Removing: /var/run/dpdk/spdk_pid60972
00:27:26.846 Removing: /var/run/dpdk/spdk_pid61023
00:27:26.847 Removing: /var/run/dpdk/spdk_pid61053
00:27:26.847 Removing: /var/run/dpdk/spdk_pid61105
00:27:26.847 Removing: /var/run/dpdk/spdk_pid61135
00:27:26.847 Removing: /var/run/dpdk/spdk_pid61289
00:27:26.847 Removing: /var/run/dpdk/spdk_pid61325
00:27:26.847 Removing: /var/run/dpdk/spdk_pid61407
00:27:26.847 Removing: /var/run/dpdk/spdk_pid61866
00:27:26.847 Removing: /var/run/dpdk/spdk_pid62223
00:27:26.847 Removing: /var/run/dpdk/spdk_pid64737
00:27:26.847 Removing: /var/run/dpdk/spdk_pid64788
00:27:26.847 Removing: /var/run/dpdk/spdk_pid65137
00:27:26.847 Removing: /var/run/dpdk/spdk_pid65187
00:27:26.847 Removing: /var/run/dpdk/spdk_pid65607
00:27:26.847 Removing: /var/run/dpdk/spdk_pid66176
00:27:26.847 Removing: /var/run/dpdk/spdk_pid66621
00:27:26.847 Removing: /var/run/dpdk/spdk_pid67685
00:27:26.847 Removing: /var/run/dpdk/spdk_pid68752
00:27:26.847 Removing: /var/run/dpdk/spdk_pid68875
00:27:26.847 Removing: /var/run/dpdk/spdk_pid68938
00:27:26.847 Removing: /var/run/dpdk/spdk_pid70527
00:27:26.847 Removing: /var/run/dpdk/spdk_pid70867
00:27:26.847 Removing: /var/run/dpdk/spdk_pid74715
00:27:26.847 Removing: /var/run/dpdk/spdk_pid75132
00:27:26.847 Removing: /var/run/dpdk/spdk_pid75736
00:27:26.847 Removing: /var/run/dpdk/spdk_pid76248
00:27:26.847 Removing: /var/run/dpdk/spdk_pid81965
00:27:26.847 Removing: /var/run/dpdk/spdk_pid82444
00:27:26.847 Removing: /var/run/dpdk/spdk_pid82553
00:27:26.847 Removing: /var/run/dpdk/spdk_pid82694
00:27:26.847 Removing: /var/run/dpdk/spdk_pid82733
00:27:26.847 Removing: /var/run/dpdk/spdk_pid82772
00:27:26.847 Removing: /var/run/dpdk/spdk_pid82811
00:27:26.847 Removing: /var/run/dpdk/spdk_pid82956
00:27:26.847 Removing: /var/run/dpdk/spdk_pid83102
00:27:26.847 Removing: /var/run/dpdk/spdk_pid83352
00:27:26.847 Removing: /var/run/dpdk/spdk_pid83476
00:27:26.847 Removing: /var/run/dpdk/spdk_pid83724
00:27:26.847 Removing: /var/run/dpdk/spdk_pid83849
00:27:26.847 Removing: /var/run/dpdk/spdk_pid83985
00:27:26.847 Removing: /var/run/dpdk/spdk_pid84386
00:27:26.847 Removing: /var/run/dpdk/spdk_pid84817
00:27:26.847 Removing: /var/run/dpdk/spdk_pid84818
00:27:26.847 Removing: /var/run/dpdk/spdk_pid84819
00:27:26.847 Removing: /var/run/dpdk/spdk_pid85097
00:27:26.847 Removing: /var/run/dpdk/spdk_pid85364
00:27:26.847 Removing: /var/run/dpdk/spdk_pid85782
00:27:26.847 Removing: /var/run/dpdk/spdk_pid86141
00:27:26.847 Removing: /var/run/dpdk/spdk_pid86723
00:27:26.847 Removing: /var/run/dpdk/spdk_pid86725
00:27:26.847 Removing: /var/run/dpdk/spdk_pid87109
00:27:26.847 Removing: /var/run/dpdk/spdk_pid87123
00:27:26.847 Removing: /var/run/dpdk/spdk_pid87137
00:27:26.847 Removing: /var/run/dpdk/spdk_pid87168
00:27:27.105 Removing: /var/run/dpdk/spdk_pid87177
00:27:27.105 Removing: /var/run/dpdk/spdk_pid87566
00:27:27.105 Removing: /var/run/dpdk/spdk_pid87615
00:27:27.105 Removing: /var/run/dpdk/spdk_pid87994
00:27:27.105 Removing: /var/run/dpdk/spdk_pid88232
00:27:27.105 Removing: /var/run/dpdk/spdk_pid88770
00:27:27.105 Removing: /var/run/dpdk/spdk_pid89381
00:27:27.105 Removing: /var/run/dpdk/spdk_pid90773
00:27:27.105 Removing: /var/run/dpdk/spdk_pid91410
00:27:27.105 Removing: /var/run/dpdk/spdk_pid91417
00:27:27.105 Removing: /var/run/dpdk/spdk_pid93469
00:27:27.105 Removing: /var/run/dpdk/spdk_pid93544
00:27:27.105 Removing: /var/run/dpdk/spdk_pid93621
00:27:27.105 Removing: /var/run/dpdk/spdk_pid93708
00:27:27.105 Removing: /var/run/dpdk/spdk_pid93838
00:27:27.105 Removing: /var/run/dpdk/spdk_pid93915
00:27:27.105 Removing: /var/run/dpdk/spdk_pid93988
00:27:27.105 Removing: /var/run/dpdk/spdk_pid94065
00:27:27.105 Removing: /var/run/dpdk/spdk_pid94455
00:27:27.105 Removing: /var/run/dpdk/spdk_pid95201
00:27:27.105 Removing: /var/run/dpdk/spdk_pid96604
00:27:27.105 Removing: /var/run/dpdk/spdk_pid96791
00:27:27.105 Removing: /var/run/dpdk/spdk_pid97067
00:27:27.105 Removing: /var/run/dpdk/spdk_pid97615
00:27:27.105 Removing: /var/run/dpdk/spdk_pid97987
00:27:27.105 Clean
00:27:27.105 12:30:57 -- common/autotest_common.sh@1453 -- # return 0
00:27:27.105 12:30:57 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:27:27.105 12:30:57 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:27.105 12:30:57 -- common/autotest_common.sh@10 -- # set +x
00:27:27.105 12:30:57 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:27:27.105 12:30:57 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:27.105 12:30:57 -- common/autotest_common.sh@10 -- # set +x
00:27:27.105 12:30:57 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:27.105 12:30:57 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:27:27.105 12:30:57 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:27:27.105 12:30:57 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:27:27.105 12:30:57 -- spdk/autotest.sh@398 -- # hostname
00:27:27.105 12:30:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:27:27.364 geninfo: WARNING: invalid characters removed from testname!
00:27:49.298 12:31:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:51.202 12:31:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:53.730 12:31:24 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:56.259 12:31:26 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:58.788 12:31:29 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:28:00.683 12:31:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
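
[editor's note] The lcov calls above form a capture-merge-filter pipeline: capture counters from the instrumented tree, merge them with the baseline taken before the tests ran (cov_base.info), then strip paths that should not count toward coverage. Condensed for readability, with the long --rc and --ignore-errors flags elided and the output directory shortened to $out (both assumptions, not the exact invocations):

  out=/home/vagrant/spdk_repo/spdk/../output
  # Capture counters from the instrumented build into a test tracefile.
  lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
  # Merge the pre-test baseline with the post-test capture.
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # Remove bundled DPDK sources and system headers from the totals.
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '/usr/*' -o "$out/cov_total.info"
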
00:28:03.252 12:31:33 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:03.252 12:31:33 -- spdk/autorun.sh@1 -- $ timing_finish
00:28:03.252 12:31:33 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:28:03.252 12:31:33 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:03.252 12:31:33 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:28:03.252 12:31:33 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:03.252 + [[ -n 5263 ]]
00:28:03.252 + sudo kill 5263
00:28:03.260 [Pipeline] }
00:28:03.275 [Pipeline] // timeout
00:28:03.280 [Pipeline] }
00:28:03.295 [Pipeline] // stage
00:28:03.300 [Pipeline] }
00:28:03.314 [Pipeline] // catchError
00:28:03.321 [Pipeline] stage
00:28:03.322 [Pipeline] { (Stop VM)
00:28:03.332 [Pipeline] sh
00:28:03.612 + vagrant halt
00:28:06.892 ==> default: Halting domain...
00:28:13.460 [Pipeline] sh
00:28:13.738 + vagrant destroy -f
00:28:17.041 ==> default: Removing domain...
00:28:17.052 [Pipeline] sh
00:28:17.332 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:28:17.341 [Pipeline] }
00:28:17.356 [Pipeline] // stage
00:28:17.359 [Pipeline] }
00:28:17.367 [Pipeline] // dir
00:28:17.371 [Pipeline] }
00:28:17.381 [Pipeline] // wrap
00:28:17.387 [Pipeline] }
00:28:17.395 [Pipeline] // catchError
00:28:17.402 [Pipeline] stage
00:28:17.404 [Pipeline] { (Epilogue)
00:28:17.412 [Pipeline] sh
00:28:17.689 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:24.258 [Pipeline] catchError
00:28:24.261 [Pipeline] {
00:28:24.273 [Pipeline] sh
00:28:24.551 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:24.808 Artifacts sizes are good
00:28:24.817 [Pipeline] }
00:28:24.830 [Pipeline] // catchError
00:28:24.840 [Pipeline] archiveArtifacts
00:28:24.845 Archiving artifacts
00:28:24.989 [Pipeline] cleanWs
00:28:25.005 [WS-CLEANUP] Deleting project workspace...
00:28:25.005 [WS-CLEANUP] Deferred wipeout is used...
00:28:25.034 [WS-CLEANUP] done
00:28:25.036 [Pipeline] }
00:28:25.054 [Pipeline] // stage
00:28:25.059 [Pipeline] }
00:28:25.071 [Pipeline] // node
00:28:25.075 [Pipeline] End of Pipeline
00:28:25.101 Finished: SUCCESS